4Pockets Harmonic Exciter.

Comments

  • It’s not out yet, by the way. 4Pockets is a machine; can’t believe he banged out another effect.

  • Very cool! Reminds me of the one I really liked on my Yamaha A3000 back in the day.

  • He still needs to fix Stompbox.

  • Ever since the licensed Aphex Aural Exciter in my Yamaha SY85, I've always loved these kinds of effects :)
    Hmmm, the list of apps to get is starting to grow again arghhhhhhhh.....

  • Seems 4Pockets makes good stuff! 👍
    This Harmonic Exciter can be quite useful and is definitely on my shopping list... 😊

  • @AudioGus said:
    Very cool! Reminds me of the one I really liked on my Yamaha A3000 back in the day.

    zipdrivesforever

  • @brice said:

    @AudioGus said:
    Very cool! Reminds me of the one I really liked on my Yamaha A3000 back in the day.

    zipdrivesforever

    I skipped the Zip on the A3000 and went straight to an external SCSI hard drive, baby! #BALLAH

  • @Samu said:
    Ever since the licensed Aphex Aural Exciter in my Yamaha SY85, I've always loved these kinds of effects :)
    Hmmm, the list of apps to get is starting to grow again

    Did you ever figure out how to load samples into the SY85? It was a bit of a noggle!

  • @Telstar5 said:

    Did you ever figure out how to load samples into the SY85? It was a bit of a noggle!

    Oh, yes, I got the 'sample file headers' directly from Yamaha and hacked together an app on the Amiga to add the headers to raw samples and/or generated files so I could load them from a floppy. Later I found a MIDI sample dump program on a Fred Fish disk that made things a bit easier...

    One of the 'samples' that drove people nuts in the music store where I purchased it was a clean 16-bit saw wave (i.e. a 64KB file with all values from 0-65535). The level of the file was so high it distorted the internal filters for some screaming TB-303-like sounds :) (i.e. there was no need to use a distortion effect to make it scream; just distort the filter).

    To be honest, one of the deciding factors when I got my first 'real' synth (the SY85) was the ability to load samples.
    I even expanded the volatile memory to 1.5MB. Never got hold of the battery-backed RAM for it...

  • Does anyone have more information on the dev’s comment about offloading computational tasks to Apple “custom chips” and rendering on the GPU to optimize CPU load?
    Is this usual, or are other plugins using the same technique?

  • SIMD vectorization, maybe (a form of parallel computation for math-heavy tasks)? Otherwise I'm not sure what he would be referring to there.
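
    Purely speculative on my part, but if SIMD is what he meant, on iOS that usually comes via Apple's Accelerate framework. A minimal sketch of a vectorized gain multiply with vDSP (buffer contents made up for illustration):

        import Accelerate

        // vDSP routines are SIMD-vectorized on the CPU -- no GPU involved.
        // One call multiplies the whole buffer by a gain instead of a scalar loop.
        var gain: Float = 0.5
        let input = [Float](repeating: 1.0, count: 512)    // stand-in for a real audio buffer
        var output = [Float](repeating: 0.0, count: 512)
        vDSP_vsmul(input, 1, &gain, &output, 1, vDSP_Length(input.count))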

  • What plugin were you discussing?
    Maybe that Analyser app?

  • @CracklePot said:
    What plugin were you discussing?
    Maybe that Analyser app?

    The Harmonic Exciter

  • @AudioGus said:

    @brice said:

    @AudioGus said:
    Very cool! Reminds me of the one I really liked on my Yamaha A3000 back in the day.

    zipdrivesforever

    I skipped the Zip on the A3000 and went straight to an external SCSI hard drive, baby! #BALLAH

    Did LOL

  • @Rodolfo said:

    @CracklePot said:
    What plugin were you discussing?
    Maybe that Analyser app?

    The Harmonic Exciter

    Oh. It has a pretty active waveform display and spectrum view.
    Maybe that is using the GPU?

  • I could get excited by this app! 🤣

  • @CracklePot said:

    @Rodolfo said:

    @CracklePot said:
    What plugin were you discussing?
    Maybe that Analyser app?

    The Harmonic Exciter

    Oh. It has a pretty active waveform display and spectrum view.
    Maybe that is using the GPU?

    That sounds like the most likely area, as using the GPU for audio processing introduces too much latency.

    I remember a couple of developers attempting to use Nvidia CUDA on the desktop (Acustica and LiquidSonics) for their convolution products, but they've pulled away from it in recent years. The logic was that convolution processing can make use of massively parallel processing for audio rendering in much the same way that GPU 3D render engines work, but it's absolutely useless for real-time audio. And music creation using computer music techniques is all about 'real-time' (well, real-time within the confines of the sample buffer and plugin latency overhead).

    I haven't seen GPU processing discussed all that much with regard to audio plugins/apps in recent years, apart from offloading graphics routines; e.g. the open-source modular VCV Rack uses OpenGL to offload its graphics.
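
    To put 'real-time within the confines of the sample buffer' into numbers: at 48kHz with a 256-sample buffer the whole chain gets roughly 256 / 48000 ≈ 5.3ms per render callback, and a CPU-to-GPU-and-back round trip can easily blow that budget. The reason convolution tempts people towards GPUs in the first place is that every output sample is an independent dot product. A toy sketch (illustrative code, not anyone's actual implementation):

        // Direct-form convolution of a signal x with an impulse response h.
        // Each y[n] is independent of every other y[n], which is exactly the
        // shape of workload that maps onto thousands of GPU threads -- but it
        // costs O(x.count * h.count) multiplies, hence FFT-based methods.
        func convolve(_ x: [Float], _ h: [Float]) -> [Float] {
            var y = [Float](repeating: 0, count: x.count + h.count - 1)
            for n in 0..<y.count {
                for k in 0..<h.count where n - k >= 0 && n - k < x.count {
                    y[n] += h[k] * x[n - k]
                }
            }
            return y
        }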

  • I wonder why audio apps with a busy GUI were already possible on a slow iPad 1 with a single-core CPU, while on today's insanely faster multi-core CPUs we suddenly need a GPU to handle that?

  • Is this necessary if you already have Shimmer?

  • @gregsmith said:
    Is this necessary if you already have Shimmer?

    I'm not sure. 4Pockets Shimmer isn't Valhalla Shimmer, and Harmonic Exciter isn't BBE hardware (it's closer to the less-than-satisfactory Nomad plugins that license the BBE brand name, and they rely more on EQ curves than true harmonic excitement).

  • Is this out?

    I can't find it on the store.

  • @rs2000 said:
    I wonder why audio apps with a busy GUI were already possible on a slow iPad 1 with a single-core CPU, while on today's insanely faster multi-core CPUs we suddenly need a GPU to handle that?

    It depends on the specifics of the visualisation. If it's shifting lots of vectors around the screen, that's a job for a GPU, and iPads have always had GPUs. But harnessing a GPU takes a very different set of programming chops than audio DSP, so many audio apps brute-force those graphics routines on the CPU with C++ (or Objective-C or Swift) rather than using OpenGL on the older iPads or Metal 2 on newer iOS hardware.

  • @jonmoore said:

    @rs2000 said:
    I wonder why audio apps with a busy GUI were already possible on a slow iPad 1 with a single-core CPU, while on today's insanely faster multi-core CPUs we suddenly need a GPU to handle that?

    It depends on the specifics of the visualisation. If it's shifting lots of vectors around the screen, that's a job for a GPU, and iPads have always had GPUs. But harnessing a GPU takes a very different set of programming chops than audio DSP, so many audio apps brute-force those graphics routines on the CPU with C++ (or Objective-C or Swift) rather than using OpenGL on the older iPads or Metal 2 on newer iOS hardware.

    A-ha. So the older apps most likely used OpenGL?

  • @rs2000 said:

    A-ha. So the older apps most likely used OpenGL?

    If they were well programmed and required GPU grunt to work, yes. Most audio apps require very little when it comes to shifting pixels around the screen, but apps such as 4Pockets' visualiser work far more effectively when utilising the GPU.

    I remember when Poseidon was introduced on iOS, it had a warning in the preferences stating that showing the cursor scanning across the 3D plot required a lot of CPU, and that's a relatively simple visualisation. It doesn't cause problems on modern iOS devices, which is probably simply because modern devices have far more firepower, but maybe Harry worked out how to offload that task to the GPU.

  • It occurred to me that he may have offloaded a lot of the calculations required to do the audio analysis for the visualizer to the GPU (FFT for the DSP inclined). Traditionally those are quite heavy calculations, so utilizing the GPU (which can do them very efficiently) would be smart.

  • @cian said:
    It occurred to me that he may have offloaded a lot of the calculations required to do the audio analysis for the visualizer to the GPU (FFT for the DSP inclined). Traditionally those are quite heavy calculations, so utilizing the GPU (which can do them very efficiently) would be smart.

    An FFT is not necessarily CPU-heavy; it depends on the precision, the window length, and the windowing function too.
    I don't think you can offload DSP calculations to the GPU on iOS.
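
    For the curious, here's roughly what a windowed magnitude spectrum looks like on the CPU with Accelerate's vDSP; the window length n and the choice of window (Hann here) are exactly the cost knobs I mean. A minimal sketch with a silent stand-in buffer:

        import Accelerate

        let n = 1024                        // window length: the main cost knob
        let log2n = vDSP_Length(10)         // log2(1024)
        let samples = [Float](repeating: 0, count: n)    // stand-in for real audio

        // Apply a Hann window to reduce spectral leakage.
        var window = [Float](repeating: 0, count: n)
        vDSP_hann_window(&window, vDSP_Length(n), Int32(vDSP_HANN_NORM))
        var windowed = [Float](repeating: 0, count: n)
        vDSP_vmul(samples, 1, window, 1, &windowed, 1, vDSP_Length(n))

        // Real-to-complex FFT, then magnitudes for the display.
        guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { fatalError() }
        defer { vDSP_destroy_fftsetup(setup) }
        var real = [Float](repeating: 0, count: n / 2)
        var imag = [Float](repeating: 0, count: n / 2)
        var magnitudes = [Float](repeating: 0, count: n / 2)
        real.withUnsafeMutableBufferPointer { rp in
            imag.withUnsafeMutableBufferPointer { ip in
                var split = DSPSplitComplex(realp: rp.baseAddress!, imagp: ip.baseAddress!)
                windowed.withUnsafeBufferPointer { wp in
                    wp.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                        vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))    // pack interleaved to split form
                    }
                }
                vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
                vDSP_zvabs(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))  // spectrum magnitudes
            }
        }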

  • @rs2000 said:

    @cian said:
    It occurred to me that he may have offloaded a lot of the calculations required to do the audio analysis for the visualizer to the GPU (FFT for the DSP inclined). Traditionally those are quite heavy calculations, so utilizing the GPU (which can do them very efficiently) would be smart.

    An FFT is not necessarily CPU-heavy; it depends on the precision, the window length, and the windowing function too.
    I don't think you can offload DSP calculations to the GPU on iOS.

    AFAIK that's correct. The GPU is only useful when it comes to actual pixel pushing.

    For anybody interested, there were a few threads of conversation with the developer of VCV Rack (on the VCV forum) where he discusses how he utilizes GPU grunt.

  • @jonmoore said:

    @rs2000 said:

    @cian said:
    It occurred to me that he may have offloaded a lot of the calculations required to do the audio analysis for the visualizer to the GPU (FFT for the DSP inclined). Traditionally those are quite heavy calculations, so utilizing the GPU (which can do them very efficiently) would be smart.

    An FFT is not necessarily CPU-heavy; it depends on the precision, the window length, and the windowing function too.
    I don't think you can offload DSP calculations to the GPU on iOS.

    AFAIK that's correct. The GPU is only useful when it comes to actual pixel pushing.

    This isn't correct. You can do calculations on the GPU, and scientists use this a lot now (your GPU is basically a supercomputer). The problem is that getting data off the GPU is pretty inefficient, so it (mostly) doesn't make sense for things like real-time audio.

    In this case the sole purpose of the FFT is visualization, in which case doing it on the GPU could make a lot of sense. You'd do the calculations, then transform them into something visual.
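
    And to be concrete about the iOS side: Metal does expose general-purpose compute, so it's less "you can't" and more "the synchronous readback makes it a bad fit for real-time audio", while a visualizer can tolerate it. A toy Metal compute sketch (invented kernel, not from any real plugin) that squares a buffer on the GPU and reads it back:

        import Metal

        // A trivial Metal Shading Language kernel, compiled at runtime.
        let source = """
        #include <metal_stdlib>
        using namespace metal;
        kernel void square(device float *data [[buffer(0)]],
                           uint id [[thread_position_in_grid]]) {
            data[id] = data[id] * data[id];
        }
        """

        let device = MTLCreateSystemDefaultDevice()!
        let library = try! device.makeLibrary(source: source, options: nil)
        let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "square")!)
        let queue = device.makeCommandQueue()!

        var input: [Float] = (0..<1024).map(Float.init)
        let buffer = device.makeBuffer(bytes: &input,
                                       length: input.count * MemoryLayout<Float>.stride,
                                       options: [])!

        let commands = queue.makeCommandBuffer()!
        let encoder = commands.makeComputeCommandEncoder()!
        encoder.setComputePipelineState(pipeline)
        encoder.setBuffer(buffer, offset: 0, index: 0)
        encoder.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                                threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
        encoder.endEncoding()
        commands.commit()
        commands.waitUntilCompleted()    // the stall that would wreck a ~5ms audio deadline
        let squared = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
        print(squared[3])                // 9.0, computed on the GPU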

  • @cian said:

    @jonmoore said:

    @rs2000 said:

    @cian said:
    It occurred to me that he may have offloaded a lot of the calculations required to do the audio analysis for the visualizer to the GPU (FFT for the DSP inclined). Traditionally those are quite heavy calculations, so utilizing the GPU (which can do them very efficiently) would be smart.

    An FFT is not necessarily CPU-heavy; it depends on the precision, the window length, and the windowing function too.
    I don't think you can offload DSP calculations to the GPU on iOS.

    AFAIK that's correct. The GPU is only useful when it comes to actual pixel pushing.

    This isn't correct. You can do calculations on the GPU, and scientists use this a lot now (your GPU is basically a supercomputer). The problem is that getting data off the GPU is pretty inefficient, so it (mostly) doesn't make sense for things like real-time audio.

    In this case the sole purpose of the FFT is visualization, in which case doing it on the GPU could make a lot of sense. You'd do the calculations, then transform them into something visual.

    It's still not anything that's usable at a consumer electronics level.

    My other business is a visual design business, and I've set up GPU render farms for a number of clients. My own workstation happens to have three Nvidia 1080 Tis which I use for CUDA-powered rendering with Redshift, V-Ray and occasionally Octane. It's true that GPUs COULD be the future of audio processing, but the problem with current technology for real-time audio processing is buffering the data between the main CPU and the GPU.

    The future promises to do away with the buffering logjam, and Nvidia already have the 'future technology' in the marketplace: Tesla accelerators. This technology enables you to use multiple GPUs together as one, with RAM shared between all of them (another problem with current consumer technology). But this technology doesn't come cheap. We're talking close to $4k for a single high-end card.

    The biggest problem in all of this is knowledge. Programming for CUDA is hard; it makes audio DSP look like a walk in the park. But I'm sure in time we'll see a new generation of programmers with the right chops, and the technology, which currently requires massive cooling units, will be shrunk to fit in an iPad. That's at least five years away, though!

    The Pro Tools Expert blog did a good post looking at this possible future a little while back.

    https://www.pro-tools-expert.com/production-expert-1/2018/4/1/graphics-cards-why-the-future-of-sound-could-be-pictures
