Audiobus: Use your music apps together.


X (was about programming AU and which language to use)


Comments

  • @moodscaper said:

    @cian said: Technically it was Simula67 (1967!)

    Ah... I wondered if anyone would mention Simula :smile:

    From Smalltalk-80 The Language:

    Simula used the object/message metaphor only for the higher level interactions in the simulations it implemented

    So technically, it was not a fully object-oriented language, although of course, many internet sources now tell us otherwise. My memory of Smalltalk pre-dates the internet :smile:

    Object-based languages have been around for a long time, but it wasn't until Smalltalk-80 that everything became an object; even classes and methods are objects in Smalltalk-80.

    I think to a large degree, that pureness was also part of its downfall / lack of acceptance. Your average C programmer would look at Smalltalk and then say... um... OK, so how do I do a switch statement or a for / next loop over an array? :smile: As an old colleague used to say, their heads were still stuck in curly brace mode :smile:

    But yeah, Simula definitely got the ball rolling with the whole object / message thang.

    I think Smalltalk's biggest downfall (and it continued to be a problem for true object languages for decades) was memory and CPU constraints. Object languages require a lot of garbage collection to keep memory from being too fragmented. With the small memory footprints at the time, it created a lot of tension between performance and memory. And, if I remember correctly, it was a runtime interpreted language rather than compiled -- which was fine if you were writing some software for your colleagues at Xerox PARC but didn't lend itself to commercial software publication.

    Not being C (which came to be the dominant language taught on college campuses in the late '70s and early '80s) was a disadvantage, but it was ahead of its time. The technology just wasn't there, when PCs became a thing, to write things like word processors in Smalltalk given the memory and performance constraints of PCs.

  • Many thanks, @brambos !

  • edited December 2020

    @espiegel123 said:
    I think Smalltalk's biggest downfall (and it continued to be a problem for true object languages for decades) was memory and CPU constraints. Object languages require a lot of garbage collection to keep memory from being too fragmented. With the small memory footprints at the time, it created a lot of tension between performance and memory. And, if I remember correctly, it was a runtime interpreted language rather than compiled -- which was fine if you were writing some software for your colleagues at Xerox PARC but didn't lend itself to commercial software publication.

    It required more powerful computers than were common at the time. The other problem was that Smalltalk compilers/IDEs were really expensive compared to other available options, particularly later once Delphi was available.

    Same thing happened to LISP. You try telling young programmers these days that once upon a time you had to buy your own compiler and it was stupidly expensive.

  • This is possibly one of my favorite threads this year. It's about something I know nothing about (but have been curious about), and it directly affects almost all the tools and toys I use daily. Insightful, thanks.

  • @brambos said:

    [...]

    If you intend to stay on Apple systems you can use the Accelerate library for SIMD/vectorized code without tying yourself to a single CPU architecture.

    This is the single best piece of advice for AUv3 dev. The Accelerate library is very good. Using the Apple-provided frameworks really does make porting to new Apple systems nearly painless. OK, so the transition to the M1 was 100% more difficult than the transition from PowerPC to Intel was: from PowerPC to Intel, I only had to click one checkbox; for Intel to M1 I had to click two. Though, one of them was to enable the iOS-to-macOS thing too.

    Seriously, using things like the Accelerate library is only going to get better at this point. I think there are probably more accelerators coming to the Apple Silicon processors. Things like the Neural Engine, the matrix multiplication unit, and the image processing support are going to make for some seriously powerful libraries to use.
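    As an illustration of what Accelerate abstracts away, here is a scalar reference for one common DSP measurement; the vDSP one-liner in the comment is the Accelerate equivalent (a portable sketch, not Apple code):

```cpp
#include <cmath>
#include <cstddef>

// Scalar reference for a common DSP measurement: the RMS level of a buffer.
// On Apple platforms, Accelerate reduces this to a single vectorized call:
//     vDSP_rmsqv(in, 1, &result, n);
// and the framework picks the best SIMD path for the CPU it runs on
// (SSE/AVX on Intel, NEON on Apple Silicon), which is why Accelerate-based
// code crossed the Intel -> M1 transition with essentially no changes.
float rms(const float *in, std::size_t n) {
    if (n == 0) return 0.0f;
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sum += static_cast<double>(in[i]) * in[i];
    return static_cast<float>(std::sqrt(sum / static_cast<double>(n)));
}
```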

  • @cian said:

    [...]

    I've never used Eiffel (though I've heard good things about it), so maybe I'm missing something here, but I've never encountered a problem where multiple inheritance (or indeed inheritance) was particularly useful. Delegation, interfaces and mixins have always been sufficient for me, and I find them a lot easier to reason about.

    O-O software design and modeling feels very natural to me. The nice thing about a well designed O-O language is that many other constructs like modules, libraries, interfaces, mixins, etc. can collapse into one simpler language construct. Bertrand Meyer's books (Object-Oriented Software Construction and the one on object modeling) are good to look at from this perspective. He breaks down all of his language design considerations and what problems they solve.

    I wouldn't recommend Eiffel for real time audio work though. It's not aimed at this. I've used Eiffel for large simulations for systems engineering type tasks and for communication systems. It's really good for these types of things.

  • @TonalityApp said:

    [...]

    @NeonSilicon All fair points about C++. I'm always curious to hear people's experiences and preferred language constructs – I'll have to look into Eiffel and Sather. What's your favorite/ideal implementation of multiple inheritance?

    Overall, Eiffel. I think it is a more complete language/design in an O-O context than anything else I've seen. Sather did have a couple of things I liked better; its iterator looping construct was really slick. It let you do all sorts of powerful things when designing data structures, and it kept all of the iterator code right in the same class as the data structure itself -- really easy to maintain. But Sather appears to be completely dead now. I tried to get a build of it going a few months ago and it looks like it would take a ton of work to get it running today.

  • edited December 2020

    X

  • My not minding Objective-C probably is influenced by my having done some work for NeXT off-and-on for a few years when they were starting up; I did a couple of bootcamps there learning Objective-C. The guy that taught it made a case, which I found convincing at the time, as to why Objective-C was preferable to C++ -- though I don't recall the particulars now. It certainly was easy to learn.

  • edited December 2020

    X

  • @espiegel123 said:
    My not minding Objective-C probably is influenced by my having done some work for NeXT off-and-on for a few years when they were starting up; I did a couple of bootcamps there learning Objective-C. The guy that taught it made a case, which I found convincing at the time, as to why Objective-C was preferable to C++ -- though I don't recall the particulars now. It certainly was easy to learn.

    Very cool! I had the original NeXT cube. I loved that computer. Although, that optical disk setup it had was a disaster. Mine kinda exploded and threw bearings everywhere. Still, that is one of my all time favorite computers.

  • edited December 2020

    @mercurialization Scripted languages are generally the exact opposite of what you want. They’ll be slow and unsafe, even with most compiled variants of Python. One of the biggest drawbacks (apart from speed) would be the lack of static type checking, which makes larger applications unwieldy and error prone.

    Languages that do work are those that are “closer to the metal”. These are compiled, do not require locking or complex dispatch for basic functionality, and include C, (some) C++, Rust, and others.
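    A minimal sketch of that "closer to the metal" discipline, assuming a hypothetical gain processor: all allocation happens up front on the main thread, so the per-buffer call stays realtime-safe:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical gain processor illustrating the realtime constraint: all
// memory is allocated once, up front, on the main thread, so the per-buffer
// process() call performs no allocation, locking, or other potentially
// blocking work. This "no surprises on the audio thread" property is what
// rules out garbage-collected and interpreted languages for the render path.
class GainProcessor {
public:
    explicit GainProcessor(std::size_t maxFrames)
        : scratch_(maxFrames, 0.0f) {}  // preallocate worst-case buffer size

    // Realtime-safe: touches only preallocated memory.
    void process(const float *in, float *out, std::size_t frames, float gain) {
        for (std::size_t i = 0; i < frames; ++i)
            out[i] = in[i] * gain;
    }

private:
    std::vector<float> scratch_;  // stands in for real working buffers
};
```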

  • @mercurialization said:
    This is somewhat unrelated, but what languages are safe enough for AUv3 programming? I’ve been researching several languages in the past few days and I’ve really enjoyed reading the syntax of Python and JavaScript/CSS/HTML.

    Are there any notable iOS applications programmed in these languages? What would the drawbacks be?

    The speed and performance aren't going to be enough for audio dev. Now, that comes with a big caveat. They are both interpreted languages, so there is overhead and no realtime programming. But for a lot of use cases, Python is really just a thin layer over C or even FORTRAN libraries. I like Python a lot for scientific kinds of work. (I really don't like Javascript, but that's a different story.) iOS at this point still bans the most important thing that makes both of these languages usable (JIT compilation). There might be something that people have cooked up to make Python compilable to iOS applications. I don't know of any, though.

    Javascript is responsible for more poorly performing applications than anything else I know of. But, there are tons of them out there now and the number is growing very fast. If you want to be employable as a programmer, learn Javascript. Personally, I'd rather go hungry.

  • @NeonSilicon said:

    [...]

    Very cool! I had the original NeXT cube. I loved that computer. Although, that optical disk setup it had was a disaster. Mine kinda exploded and threw bearings everywhere. Still, that is one of my all time favorite computers.

    The opticals were a disaster. They seemed like such a good idea, but they were so slow and, as it turns out, unreliable. The downside of trying to be bleeding edge. There was really cool electronic music software on it hidden away -- I think some guys from Stanford's computer music research group were convinced to use Cubes to develop on. There was a really cool program that modeled the vocal tract. If I remember correctly, my first exposure to MAX was on NeXT -- this was before it was a commercial product. Lots of really cool stuff. Steve really wanted to showcase the arts applications of the technology. If I recall correctly, Pixar's Tin Toy animation was a sort of proof-of-concept that NeXT's distributed multi-processing would make computer animation more practical. The rendering engine would look for idle computers on the intranet at night, take over the CPU of any not in use, and render frames. I think it took something like the entire summer to render a few minutes. (It's possible that this was Tin Toy's precursor -- I'm getting old.)

  • Javascript: https://www.destroyallsoftware.com/talks/wat
    The JS part starts about a minute and a half in or so.

  • @espiegel123 said:

    [...]

    Yeah, I have a faint memory of that. (I'm getting old now too.) The best thing for me was that it came with Mathematica. That helped me so much in my research as an astronomy student. It was great for the visualizations of the simulations we were doing.

    The idea behind the optical disk was brilliant. The point of being able to bring your entire environment and pop it into a lab computer was great. I actually get reminded of it now whenever I push an SD card into a Raspberry Pi. You could do the same thing with school labs using these (minus the exploding-bearings thing).

  • edited December 2020

    @mercurialization said:
    I’m not sure if you’re being sarcastic — I’m new here — but I had assumed you programmed your apps in addition to designing them.

    I definitely do. But I don't consider myself a coder. I have no background in it (other than 30 years of doing it my way), and I'm not particularly interested in it. I just try to learn whatever I need to learn in order to build a high-performing, problem-free product. The thought of learning another language because of more elegant closures, etc., doesn't appeal to me and is a waste of time in my book. Swift doesn't offer me anything other than losing a ton of time and my existing codebase for zero gain.

    I don’t foresee any issues programming the UI and functions of the app. The confusion arises when trying to implement it as an AUv3 app because I can’t use the language that I’ve been studying to pass commands. I assume it’s not too bad if you already have experience in Objective-C though. I don’t plan to pass or receive audio at this point, only MIDI, so hopefully that will reduce the complexity of any separate language that is required.

    You won't have to do DSP, but you'll still have to grasp Apple's concepts and design patterns behind audio buffers and frame timing, because the MIDI handling in AUv3 is combined with the audio processing. And the MIDI stuff is arguably more obtuse and less well documented than the audio part of AUv3.

  • @brambos said:

    @mercurialization said:
    I’m not sure if you’re being sarcastic — I’m new here — but I had assumed you programmed your apps in addition to designing them.

    I definitely do. But I don't consider myself a coder. I have no background in it (other than 30 years of doing it my way), and I'm not particularly interested in it. I just try to learn whatever I need to learn in order to build a high-performing, problem-free product. The thought of learning another language because of more elegant closures, etc., doesn't appeal to me and is a waste of time in my book. Swift doesn't offer me anything other than losing a ton of time and my existing codebase for zero gain.

    Software is a big world, really big. There are entire industries built around writing software the way you've just described. So, if you ask me, you are a programmer. I understand what you mean though; I've been writing software in a lot of different settings for most of my life now, but I don't consider myself a "Software Engineer" because I don't have that training. I know it's kinda weird, but that's the way I think about it.

    I'll quote myself from a comment I made when learning Swift: "They can have my Objective-C when they pry it from my cold dead hands." Now I'd say that I don't mind it, and I'd actually prefer it if I could get rid of the Objective-C++ parts of my AU dev. It would make it cleaner. This isn't because I think Swift gives me anything I need or want -- for example, I think the closures in Swift make the code harder to maintain and read (I say this even though I think Haskell is the most fun and coolest language I've ever used) -- it's just that Swift is OK. You'll get used to it fast and it's clean enough to write in.

    I don’t foresee any issues programming the UI and functions of the app. The confusion arises when trying to implement it as an AUv3 app because I can’t use the language that I’ve been studying to pass commands. I assume it’s not too bad if you already have experience in Objective-C though. I don’t plan to pass or receive audio at this point, only MIDI, so hopefully that will reduce the complexity of any separate language that is required.

    You won't have to do DSP, but you'll still have to grasp Apple's concepts and design patterns behind audio buffers and frame timing, because the MIDI handling in AUv3 is combined with the audio processing. And the MIDI stuff is arguably more obtuse and less well documented than the audio part of AUv3.

    Ouch, being less documented than Apple's AUv3 audio spec pretty much means not documented at all. I haven't tried doing any MIDI generating in AUv3 yet. I'm confused as to why it's been tied directly to the audio processing callbacks. I can see why the events are fed to an audio AUv3 with the event timing, but I don't understand why in general. It feels like it makes it less general without any benefit. Do you know why it's done this way? Are they looking at using MIDI 2.0 to do all of the parameter automation in AUs at some point?

  • @NeonSilicon said:
    Ouch, being less documented than Apple's AUv3 audio spec pretty much means not documented at all.

    The first link mentioned in this thread is me reverse engineering how the hell MIDI is supposed to work in AUv3 (together with AUM's Jonatan) based on nothing but a cryptic slide from a WWDC'17 presentation and some barely commented C headers from the AU framework.

    I think others have taken those and turned them into slightly more useful tutorials and sample code, but that's pretty much it :D

    I haven't tried doing any MIDI generating in AUv3 yet. I'm confused as to why it's been tied directly to the audio processing callbacks. I can see why the events are fed to an audio AUv3 with the event timing, but I don't understand why in general. It feels like it makes it less general without any benefit. Do you know why it's done this way? Are they looking at using MIDI 2.0 to do all of the parameter automation in AUs at some point?

    It makes sense though. All incoming and outgoing MIDI events can be synced with the audio down to the frame level. They're also part of the same realtime-safe render call to ensure everything runs on the same thread at the same moment. I suspect any other construct may have introduced jitter and race conditions and other nasty side effects.

    If you already know how to make an AU plugin it's only a few minor extra steps to add MIDI to them. But I'd say that's pretty much a precondition to make the concepts somewhat comprehensible.
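    The frame-level scheduling described here can be sketched with simplified stand-in types (the real API uses the AURenderEvent / AUMIDIEvent linked lists from AudioToolbox; the names below are hypothetical):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical, simplified stand-ins for AudioToolbox's AURenderEvent /
// AUMIDIEvent. Events are assumed sorted by sampleTime, as a host delivers
// them: each carries a frame offset relative to the start of this render
// cycle.
struct MidiEvent {
    int64_t sampleTime;  // frame offset within the current buffer
    uint8_t status;      // e.g. 0x90 = note on, 0x80 = note off
};

// Sketch of the render-loop scheduling: generate audio up to the next
// event's offset, apply the event, continue. Because MIDI is handled inside
// the same realtime render call as the audio, every event lands with
// frame-accurate timing. Returns the frame at which each event fired.
std::vector<int64_t> renderCycle(const std::vector<MidiEvent> &events,
                                 int64_t frameCount) {
    std::vector<int64_t> firedAt;
    std::size_t next = 0;
    int64_t frame = 0;
    while (frame < frameCount) {
        int64_t until = (next < events.size())
                            ? std::min(events[next].sampleTime, frameCount)
                            : frameCount;
        // ... synthesize samples for frames [frame, until) here ...
        frame = until;
        while (next < events.size() && events[next].sampleTime == frame) {
            firedAt.push_back(frame);  // e.g. start or stop a voice here
            ++next;
        }
    }
    return firedAt;
}
```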

  • @brambos said:

    [...]

    It makes sense though. All incoming and outgoing MIDI events can be synced with the audio down to the frame level. They're also part of the same realtime-safe render call to ensure everything runs on the same thread at the same moment. I suspect any other construct may have introduced jitter and race conditions and other nasty side effects.

    If you already know how to make an AU plugin it's only a few minor extra steps to add MIDI to them. But I'd say that's pretty much a precondition to make the concepts somewhat comprehensible.

    Yeah, that does make sense. I guess my issue with thinking about it is that I want to think about MIDI in the context of communication between different applications and even devices. If you are inside the space of a single host, then syncing it to the audio thread/timing will make it perform better and even be easier to reason about.

  • @NeonSilicon said:
    O-O software design and modeling feels very natural to me. The nice thing about a well designed O-O language is that many other constructs like modules, libraries, interfaces, mixins, etc. can collapse into one simpler language construct. Bertrand Meyer's books (Object-Oriented Software Construction and the one on object modeling) are good to look at from this perspective. He breaks down all of his language design considerations and what problems they solve.

    I think my problem with OO is that I just don't believe it fits most of the programming problems I personally run into. Functional programming is generally a better fit, and what I found with OOP was that I would spend all my time trying to create very artificial OO models, or writing objects to wrap what was essentially an array/dictionary. I feel, after 25 years, that the promises of OO (reuse, less code, better maintenance) mostly haven't been realized. Obviously others disagree on this, and that's fine.

    The other issue with OO is that on modern CPUs it results in pretty slow code. That doesn't always matter, but if you care about performance it's a problem. I know a guy who was working on a high performance Java application (for some trading thing) and by making it essentially not OO (but using the kind of arena/entity patterns you'd see in the games industry), he massively improved performance.

    I wouldn't recommend Eiffel for real time audio work though. It's not aimed at this. I've used Eiffel for large simulations for systems engineering type tasks and for communication systems. It's really good for these types of things.

    Yeah I can see that. For communications stuff I really like actor based systems. Elixir is a really nice mixture of actors and functional programming if you haven't seen that.

  • @brambos said:
    I definitely do. But I don't consider myself a coder. I have no background in it (other than 30 years of doing it my way), and I'm not particularly interested in it. I just try to learn whatever I need to learn in order to build a high performing, problem free product. The thought of learning another language because of more elegant closures etc. doesn't appeal to me and is a waste of time in my book. Swift doesn't offer me anything other than losing a ton of time and existing codebase for zero gain.

    When professional programmers say "elegant", what they typically mean is that something will result in code that is easier to understand and maintain, fewer bugs, and faster development time.

    So the reason that closures are good is not because they're theoretically better, but simply because they solve certain common problems with fewer lines of code and bugs. These things tend to be more important when you have larger code bases which are maintained by a team over a period of years. There are a lot of coding practices that work just fine when you're working solo on code that you (mostly) won't be returning to - which fail to scale to industrial scale systems.

    So that's a long-winded way of saying I can see how Swift possibly doesn't offer you anything useful, while it is still a significant improvement for iOS programmers as a whole.

  • edited December 2020

    @NeonSilicon said:
    For example, I think the closures in Swift make the code harder to maintain and read.

    I haven't used Swift, so maybe this is a problem with Swift, but generally I find closures make code easier to maintain and understand. I used to hate having to write a special class in Java just to create what was essentially a closure. They also allow you to write incredibly powerful code in just a few lines - which I'm personally a fan of.

    I think Haskell is really neat - though having tried to use it for real projects (hobby stuff, but real ones) I rapidly became disenchanted... It's worth learning as it really expands your ability to think about problems (and how to solve them), but I would not want to use it professionally. That level of abstraction makes my brain hurt.

  • @cian said:

    @NeonSilicon said:
    For example, I think the closures in Swift make the code harder to maintain and read.

    I haven't used Swift, so maybe this is a problem with Swift, but generally I find closures make code easier to maintain and understand. I used to hate having to write a special class in Java just to create what was essentially a closure. They also allow you to write incredibly powerful code in just a few lines - which I'm personally a fan of.

    I think Haskell is really neat - though having tried to use it for real projects (hobby stuff, but real ones) I rapidly became disenchanted... It's worth learning as it really expands your ability to think about problems (and how to solve them), but I would not want to use it professionally. That level of abstraction makes my brain hurt.

    Java is definitely not the way to do closures in an O-O setting. But Java isn't the way to do anything in an O-O setting (or any other setting, if you ask me). Swift definitely isn't that bad; closures are done pretty well in Swift, actually. The old way to do them in Objective-C would have been function pointers (or selectors). That method leads to more-or-less automatic functional decomposition and reuse at the function level. The use of closures in Swift tends toward large closure blocks that are executed on different threads asynchronously, where the naming and parameters are obfuscated by syntactic sugar. This really isn't a huge issue in any way, but I can see why people might prefer the old Objective-C way. And in modern Objective-C you get pretty much the same thing as Swift closures using blocks. I haven't checked, but I'd bet that you can probably use Objective-C blocks as Swift closures through the Swift-Objective-C bridge.

    Haskell is really fun to me. My son says that you'd need a Ph.D. in math to program in it. He's kinda close to the truth. It certainly hasn't seen widespread adoption and there are good reasons for that. One of the reasons I like it is that I can go back and look at code I wrote in it 10 years ago and know exactly what I was doing easily. SML is a nice functional language too. There are certain settings where that is what I'd choose to use.

    I haven't learned Elixir. I have played with Erlang in the past and found it to be pretty slick. I should look at Elixir because it does look interesting. D has some similar features to Elixir. D is pretty neat in lots of ways. And people actually do write audio code in D.
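    The function-pointer-versus-closure contrast discussed above can be sketched in C++, standing in for the Objective-C/Swift case (names here are hypothetical):

```cpp
#include <functional>

// A plain C-style function pointer carries no state: everything it needs
// must be a parameter or a global. This is the "old Objective-C" style
// (function pointers / selectors) mentioned above.
static float half(float x) { return x * 0.5f; }

// A C++ lambda (the analogue of a Swift closure or an Objective-C block)
// can capture surrounding state, so the gain value travels with the
// function object itself.
std::function<float(float)> make_gain(float gain) {
    return [gain](float x) { return x * gain; };
}
```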

  • I started digging into AUv3 development this weekend. Built a few open source projects, tweaked Apple's own examples, and then quickly ran into App ID allocation limits, since each extension has a separate ID. Super annoying for sure. I'm brand new to DSP, so I've started with just trying to wrap my head around it by generating a sine wave oscillator. My understanding is that each internalRenderBlock call will get a frameCount that tells you how many samples to process. So I would need to generate values from a sine wave for each frame that call expects, and assign them to the audio buffer. Is that the general idea?

  • @jscheel Yup! The next step would be changing the pitch
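    The general idea confirmed above can be sketched host-agnostically; in a real AUv3 this loop would run inside the block returned from internalRenderBlock, writing into the AudioBufferList the host provides (names and parameters here are illustrative):

```cpp
#include <cmath>

// Sketch of a sine oscillator render function: each render call receives a
// frameCount, and the oscillator writes one sample per frame while advancing
// a phase accumulator. Passing the phase by reference (it would be a member
// variable in a real DSP kernel) lets the waveform continue seamlessly
// across successive render calls. Changing frequencyHz changes the pitch.
void renderSine(float *out, int frameCount, double &phase,
                double frequencyHz, double sampleRate) {
    constexpr double kTwoPi = 6.283185307179586;
    const double phaseIncrement = kTwoPi * frequencyHz / sampleRate;
    for (int frame = 0; frame < frameCount; ++frame) {
        out[frame] = static_cast<float>(std::sin(phase));
        phase += phaseIncrement;
        if (phase >= kTwoPi) phase -= kTwoPi;  // keep phase from growing unbounded
    }
}
```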

  • @jscheel said:
    I started digging into AUv3 development this weekend. Built a few open source projects, tweaked Apple's own examples, and then quickly ran into App ID allocation limits, since each extension has a separate ID. Super annoying for sure. I'm brand new to DSP, so I've started with just trying to wrap my head around it by generating a sine wave oscillator. My understanding is that each internalRenderBlock call will get a frameCount that tells you how many samples to process. So I would need to generate values from a sine wave for each frame that call expects, and assign them to the audio buffer. Is that the general idea?

    I already understand that I will never, ever produce anything audio- or MIDI-related :D :# :o

    I will be happy when I get a working app that runs on iPhone and iPad and correctly rotates - nothing more at all.
    Getting it to accept input from the share sheet would be the next thing ...

  • One thing I don't understand from the documentation and examples is the way to store, for example, MIDI data in AudioUnit parameters like Atom does. Can we add nodes and parameters to the tree dynamically? (It seems there are no methods for it.) Or do we destroy and recreate the _parameterTree object when we need to allocate more nodes? Or I dunno

  • Or do we simply need to use fullState / fullStateForDocument?

  • For what reason did you destroy this topic by renaming it to "X" and replacing all of your posts with "X" too, @mercurialization?

    This is an affront to all other members of the forum!
