Four common mistakes in audio development
I've just published an article I've been working on this week, covering four common mistakes that developers of audio apps make - a frequent cause of glitching and clicking in audio apps. It also introduces a tool for developers that helps them track down related problems in their code.
It's mainly for developers, but it's written in a way that should be more-or-less accessible to anyone willing to skip past the code examples.
Four Common Mistakes in Audio Development
This is a discussion of four common mistakes that audio developers make, how to do better, and how to detect whether there’s a problem. It’s written primarily for developers, but should be accessible to non-developers too. I introduce Realtime Watchdog, a diagnostic tool for developers, and provide a brief survey of popular audio libraries.
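By way of a taster - this snippet is a standalone illustration of the theme, not an excerpt from the article - here's the shape of one classic pitfall: blocking on a mutex inside the render callback. The callback runs to a hard deadline, so anything that can wait indefinitely is dangerous; one common workaround is to try the lock and fall back to silence for that cycle.

#include <string.h>
#include <pthread.h>
#include <AudioToolbox/AudioToolbox.h>

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    pthread_mutex_t *mutex = (pthread_mutex_t *)inRefCon;

    // Risky: pthread_mutex_lock() can block the audio thread for an
    // unbounded time while another thread holds the lock.
    // pthread_mutex_lock(mutex);

    // Safer: try the lock, and output silence for this cycle if it's busy.
    if ( pthread_mutex_trylock(mutex) == 0 ) {
        // ... produce audio into ioData ...
        pthread_mutex_unlock(mutex);
    } else {
        for ( UInt32 i = 0; i < ioData->mNumberBuffers; i++ ) {
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
    }
    return noErr;
}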
Comments
Currently #1 on Hacker News!
Sweet!
Excellent article, thanks for writing and sharing!
I've got to say, I love how supportive devs are within the iOS music-making community.
@Michael
I am a lifelong developer (30+ years), not in audio unfortunately. I'd just like to say that having people like you in the world, sharing their learnings and insights, makes a developer's life so much more enjoyable.
Thank you so much! That's lovely!
Fabulous, thanks!
I enjoyed the arrow-shooting CPU. Not sure what the rest was about though. Literally.
Excellent article! Thanks for writing it @Michael
Wow @Michael... That's clever talk... All I got is the frustration of having such power in my hands with an iPad, and hundreds of wires from the last 25 years now unused... But I still have f***** glitches every two minutes...
I also see that C++ is the most accurate language since 1983... A bit like the MIDI protocol...
I go eat my bananas now...
@Michael thank you for a glimpse into the challenges of coding audio for iOS. We all benefit from the hard work and collaboration/cooperation among developers, which, much like their coding, goes unseen and underappreciated.
Do you think the emphasis on Swift coding at WWDC will lead to a flood of audio apps with glitching or is Apple likely to address these issues with Swift?
@InfoCheck Thanks for the kind words! I do worry about Swift, yeah; it's partly what prompted me to write the article. The Swift team are aware of the limitations with respect to audio, but I doubt that it's going to be addressed. There's a little bit of leadership from the audio team on these issues (Doug Wyatt of the Core Audio team explicitly mentioned them in a recent talk), but it's quite easily missed.
As it stands there appear to be many misconceptions about Swift from developers who should really know better - I've had several developers over the last day tell me that surely I'm wrong and that Swift is fine. Of course, we have it direct from the horse's mouth that it isn't...
Good article! I also found the HN discussion about it very interesting, lots of game audio devs chiming in. Wish I was allowed to seriously join in, but I know too much about our game console audio (having written significant portions).
I "invented" (making stuff up as I went along, really, had no formal education for it) circular non-locking buffers back in 76', but called them "escalators". Stream ISR to/from packet assembler and router in middle ground. I used only three priority levels in those embedded days - ISR (very quick in-n-out), Middleground (more detailed event handling, but interruptble by OISRs), and one while (true) loop scheduling everything else. Very efficient. Had to be, with only 4KB ROM and 4KB RAM. As a result I hate the thread model. Too floppy and indeterminate. Event and data flow stuff typically has very short handlers, no need to subject them to pre-emption beyond the ISR for the hardware. Or try and run them in parallel. The Hard Real Time contract has to be honored either way, and stackless run to completion FSM functions have extremely low overhead, and deterministic too. More time for the working code.
Great article Michael. Not just the info, the writing and the pacing. I'll probably never need the code samples but I'm definitely going to reuse the high horse paragraph. Delicate, brilliant. That's quite good company you keep according to the acknowledgements!
Anecdotally... I don't know anything about it really, but I know that NanoStudio uses just enough Obj-C to get it to compile for iOS and no more. Application code is written in C++ (and vanilla C), and pretty much everything that has to do with audio is written in assembler. 15 two-OSC synths with 3 ENVs, 4 LFOs and 6 effects per track, and the app never ever glitches on me. A marvel.
I wonder though, short of something totally exploding, whether - if you're a fairly new developer (or audio developer) - there isn't some value in using simpler libraries/languages in order to get a working (hopefully) product out the door and into the hands of customers faster. Optimizing code that no one cares about is teh sux.
@dwarman what game system are you referring to?
Michael, if you're ever in the Pacific Northwest of the US, I'd love to buy the beer and listen to you and David (dwarman) rap!
Surely this is a case for actually bringing more hardware into the iPad. It really should have a real-time synthesiser sitting there in hardware, in actual transistors, that can be controlled by software but not interrupted by it. If that's not possible, then maybe it can be distilled down to an analogue ramp generator that keeps going up and, when it approaches the maximum excursion, eventually reverses unless the slope degree is altered or the direction is switched under software control. That way, all the software has to do is tell it to make incremental adjustments in slope degree and direction, and there you are: a non-digital, non-quantised (much), computer-controllable waveform-outputting thing. Do that four times, and that'll take care of four-channel immersive phase-coherent Quasonama for VR audio. There won't be glitches: if the audio instructions run out, it carries on making the same wave-ramping slope characteristics as when it was last told to adjust. I wonder if this could be implemented in the GPU?
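In software terms, the behaviour I mean would be something like this - a toy model with made-up names, just to make the idea concrete. The point is that if the control messages stop arriving, the wave simply carries on:

// A software model of the proposed ramp generator (all names and
// numbers invented for illustration).
typedef struct {
    float level;      /* current output, -1.0 .. +1.0 */
    float slope;      /* amount added per sample */
    int   direction;  /* +1 rising, -1 falling */
} RampGen;

/* Ticked once per sample by the "hardware"; needs no help from software. */
static float ramp_tick(RampGen *r) {
    r->level += r->direction * r->slope;
    /* Approaching maximum excursion? Reverse, unless software intervenes. */
    if ( r->level >= 1.0f || r->level <= -1.0f ) {
        r->direction = -r->direction;
    }
    return r->level;
}

/* The only job software has: occasional incremental adjustments. */
static void ramp_adjust(RampGen *r, float newSlope, int newDirection) {
    r->slope = newSlope;
    r->direction = newDirection;
}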
Cheers, guys
I'd prefer to see folks just getting into the right habits - it isn't harder to do it properly the first time. It is a bit hard to move from something that's riddled with nasties to clean code, but to get it right first up, with the right tools, is pretty straightforward. I agree about premature optimization, but I don't think this is it.
Done!
Hmm... I can't imagine Apple ever doing something like that. They're into pro audio, but not that into it.
When I was much younger I earned good bucks programming (Pascal, then vanilla C a la K&R). I specialized in designing graphical interfaces (for DOS) to manage real-time data flows (medical, etc). Hard work, very very tight schedules and no weekends. I'll say it again: I was young and slept very little, so I also managed to have some fun in life.
Since those times I've developed a deep repulsion for writing code. Now even Max repels me a bit. The article is good, though, and my personal preferences hold only for me, needless to say.
@Michael in reading a forum post I ran across a developer who talked about switching from C++ coding to Objective-C, so I referred him to your blog post and mentioned the glitching problems which may arise.
From time to time there are developers on the forum asking developer-type questions, so perhaps it would be a good idea to add a sticky on the forum about the Audiobus Developer Center, so they're more likely to receive appropriate information for those questions?
@syrupcore - you up here too? Where? We could rap and jam even if @Michael can't make it (though that would be great too).
Nintendo (specifically NTD, not NOA) - audio engine and DSP SDK for the Wii U and all points N.
Sounds like you burned out early!
Sure, could be - I've forwarded it on to Mr Forummaster McForumface himself
Not early, I'm not so young. That was my start, I've worked with C++ and Java too. Then I discovered video games and other ways to be creative. Never came back.
You didn't happen to also work with Paradox, did you @zarv - on the medical data flow stuff? Just curious
And, if you can believe it, K&R (pre-ANSI) C is still in use in some places! Yup - I just recently ported some: tens of thousands of lines of (someone else's!) impenetrable code that's too hairy to convert to ANSI C. (One-letter variable names, no comments! Lots of G*TOs!!! Ugh! - but it's been working for a decade, so it's best left alone!)
Then of course there was programming DOS in C with hooks into assembler for games with extended memory. Taught a class about that back in the day.
I'm still programming though... But I understand how you moved on too!
...
I know, I know, K&R was like the Commodore 64: an everlasting love for many. Besides, their book was really wonderful for those times, and not only for those times. I remember Paradox, but I never used it, though I started programming (BASIC on the ZX Spectrum aside) with Borland Pascal.
The hardware side of our team (actually, a duo) was an engineer who took care of data management, in assembler. I handled the "graphic" side; that's why I later ended up as a junior artist designing/modelling props and environments in games (Maya, ZBrush, Unreal, all that stuff).
Now I compose music for games, once in a while, when old friends ask me to.
Ahhh. I asked because I did some similar consulting back in the UK one time - and they were using Paradox.
Cut my teeth on RSTS/E and Macro-11 assembler back in the day (on ASR-33s with punched-tape readers - although we did have some VT100s too) and early 8080 S-100 systems, as well as the 6502 on the Apple II and others. Oh, and the 6809 (I liked 6809 assembler!). And then the Z80 too. (These young'uns don't remember what it was like to cram something into 2K of memory and have to use the Z80's internal registers for extra storage.)
In fact, my final-year Elec Eng project was a ground-up design of a graphics card built around the 6809 as a front end for a vector processor architecture. Them were't days!
OK - my first Personal Computer, an Elliott 4130 mainframe, in 1967. Personal, as in I had control over the power switch and was the sole user, from bootstrap through OS to my applications. Taught myself programming on this beastie. By day, however, it was the commissioning test host; my actual job was fixing them as they rolled off the production line.
Built a few 8-bitters for work a little later, then Z80s controlled my life somewhat later. Then 68HC11s. Now DSPs.
Wow. Feeling very small and Johnny-Come-Lately, right now
Oh, c'mon @Michael. You have youth and energy. I just creak along these days. All my years are more a few of this, a few of that, not 50 years of intense specialization. I can understand a lot of things, but I'm not a deep diver into many, even now. You guys - and I have to include Taylor for Audulus in this - are at the right part of the technology wave, and doing things I could only dream of back then. I got too many arrows in my back.
Very interesting and informative read especially for a newbie dev like myself - thanks so much for sharing your thoughts Michael. I'm really looking forward to getting acquainted with TAAE2, and if I may, I'd like to share a few thoughts and experiences that you may find amusing. As some of you may know, my app is very simple, and kinda started as an experiment to see how far I could get using Swift and AVFoundation alone. It wasn't going too bad, until I decided I needed "simple" stuff like IAA and a lowpass filter.
Adding IAA support was a real head-scratcher coming at it from Swift and also seems to be totally undocumented in that context as far as I could tell - I Googled myself senseless on that one. I got there in the end, but wasn't expecting to have to use a C bridge to an UnsafeMutablePointer and back again to get IAA working...
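In case it saves the next person some Googling: the C side boiled down to a single publish call, something like the trimmed-down sketch below (the four-char component codes here are made up - use your own, matching your Info.plist):

#include <AudioToolbox/AudioToolbox.h>

// Publishing makes the app's output unit visible to IAA host apps.
void publishIAANode(AudioUnit outputUnit) {
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_RemoteGenerator,
        .componentSubType      = 'xmpl',  /* made-up four-char code */
        .componentManufacturer = 'xmpl',  /* made-up four-char code */
        .componentFlags        = 0,
        .componentFlagsMask    = 0,
    };
    // Getting this struct across from Swift was what all the
    // UnsafeMutablePointer gymnastics were about.
    OSStatus result = AudioOutputUnitPublish(&desc, CFSTR("moodscaper"), 1, outputUnit);
    if ( result != noErr ) { /* handle the error */ }
}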
As for the filter, the built-in AVAudioUnitEQ (when set up as a lowpass filter) really does not like to be twiddled with in realtime - it's a major crackle-fest. I mean, who on earth would want to change the cutoff frequency of a filter while the audio is playing, right? So then I had to roll my sleeves up and get to grips with Core Audio, and at least try to do it properly. Luckily, I'm reasonably comfortable with C++, but I would have liked not to have had to "cobble together" around 500 lines of it just for a lowpass filter. And having to do this sort of stuff was driving me a little bit crazy:
_auAudioUnit = _avAudioUnit.AUAudioUnit
On the one hand, I've been pleasantly surprised by what you can do with AVFoundation. On the other, it can also be very confusing and slightly frustrating when you need to dig down into the world of Core Audio and juggle no less than three representations of an audio unit. I must admit, one of the reasons I've been holding off on investing time in a 3rd party audio library was to see how much love AVFoundation would get in iOS 10. The answer seems to be: not much, if any! As I try to develop moodscaper into a more "serious" app, it's becoming increasingly obvious I need a more serious audio engine to do the "heavy lifting", as that's really not my forte! I'm looking forward to getting into TAAE2 very soon and seeing where that takes me and my current and future app aspirations! All the best, -Rob
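On the crackle-fest: the usual culprit is stepping the filter coefficients abruptly when a parameter changes. The standard fix is to ease the live value toward its target a little at a time, so no single update is audible. A rough sketch of the idea - a simple one-pole lowpass, not moodscaper's code, and real code would also make the target atomic and update the coefficient per-buffer rather than per-sample:

#include <math.h>

// One-pole lowpass with a smoothed cutoff: illustration only.
typedef struct {
    float z1;            /* filter state */
    float cutoff;        /* live cutoff in Hz, smoothed */
    float targetCutoff;  /* where the UI last asked the cutoff to be */
    float sampleRate;
} SmoothedLowpass;

/* Called from the UI; the audio thread only ever eases toward this. */
static void lp_set_cutoff(SmoothedLowpass *lp, float hz) {
    lp->targetCutoff = hz;
}

static void lp_process(SmoothedLowpass *lp, float *samples, int count) {
    for ( int i = 0; i < count; i++ ) {
        /* Ease the live cutoff toward the target a tiny bit per sample,
           so the coefficient never jumps audibly. */
        lp->cutoff += 0.001f * (lp->targetCutoff - lp->cutoff);
        float g = 1.0f - expf(-2.0f * (float)M_PI * lp->cutoff / lp->sampleRate);
        lp->z1 += g * (samples[i] - lp->z1);
        samples[i] = lp->z1;
    }
}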
Rather smaller than David's (@dwarman's) setup, our PDP-11/34 installation was similar to the pic below, although the main rack and drives in our setup were in an air-conditioned room separate from the terminals. You can also see a couple of RK05 disk drives (I think they're RK05s - not sure) alongside, storing 2.5MB each (those things were like washing machines).
The bootloader was toggled in, in octal, on the front panel.
http://webserver.ics.muni.cz/bulletin/articles/clanky_img/574_3_v.jpg
Here's the RK05 disk pack!
technology.niagarac.on.ca/staff/mcsele/images/pdp11Images/RK05CartridgeLoadedDoorOpen.jpg
And here are the ASR-33 terminals we used to use. Complete with tape reader!
bytecollector.com/images/asr-33_vcf_02.jpg
And, for network (haha!) connectivity (dial-up), we had a 300-baud acoustic coupler - something like this:
mauseum.net/Hardware/w48mod_1.jpg
Yup - them were't days