Audiobus: Use your music apps together.

The Mathematical Artist / Algorithmic Creators Let Loose

This topic is not quite the same as an earlier thread, where I said apps are my robotic partners. In that thread I was thinking more about robots doing the heavy lifting according to my wishes. But as I continue generating full arrangements from a piano or drum track, I am realizing more and more that the algorithmic choices a lot of apps make when applied to prerecorded MIDI are often better (or more surprising) than what I would have come up with myself.

It is particularly noticeable to me when a MIDI drum track is the basis. In this case tonal values are assigned to sounds that are often non-tonal. I have no idea how this happens, but it greatly influences how I construct a track. Rather than traditional composing, it is more like "found art". I assign a voice, there is a musical result, and I select the pieces that work for me. In this way making music becomes more like sculpture, as I slowly chip away the parts that don't work. In my iOS beginnings I was more of a traditionalist, creating individual tracks one at a time. But, just as using a touch screen is more fun, so is getting a musical package delivered and slowly unwrapping it.

I probably will return to making individual tracks, even as I now routinely add an improvised lead line over the MIDI generated arrangement. But turning a piano recording into a guitar solo and its ilk will always be in my mind as an option.

How do you think this compares to randomly generated patterns and might they be my next stop?

Comments

  • edited April 2019

    "Gardening not architecture" - Brian Eno, Oblique Strategies

    https://www.edge.org/conversation/brian_eno-composers-as-gardeners

    Edit: my personal practice is very often to build machines to improvise with (something I suspect you might guess from my first two apps). If you search on YouTube for 'junklight' you can see some of my videos too.

  • The user and all related content has been deleted.
  • the box playing itself isn't touching, it's boring plim-plim or just garbage?

    I find this aspect fascinating. So if I carefully craft my composition to be played by my machine vs guiding and playing with my machine.....

    & yes - my preference in my own music is the latter. BUT could you tell in a blind test?

  • edited April 2019
    The user and all related content has been deleted.
  • McDMcD
    edited April 2019

    I’d be shocked to see you go with AI music at the core given your lifetime as a player. But you shock me repeatedly and we shall see.

    AI makes you a bit of an abstractionist (Pollock) vs. the photorealistic painters. You start curating color and texture without the forms of traditional musical history and culture.

    If you want to try AI, I recommend TC-11, or better yet TC-Data, since it outputs pure MIDI played with touch on the iPad. Lots of presets.

    Go crazy. Too late? Short putt.

  • We used to play in clubs, jamming with good and bad musicians, learning our craft as we fumbled along toward fleeting moments of insight and bliss. Now we jam with developers and their algorithms, pulling them in for a jam, knowing we can toss aside the pieces that don't work in order to build on the little bits of unexpected insight and bliss that we trap on the glowing screen.

  • @lukesleepwalker said:
    We used to play in clubs, jamming with good and bad musicians, learning our craft as we fumbled along toward fleeting moments of insight and bliss. Now we jam with developers and their algorithms, pulling them in for a jam, knowing we can toss aside the pieces that don't work in order to build on the little bits of unexpected insight and bliss that we trap on the glowing screen.

    Nice use of prose, that.

  • edited April 2019

    @LinearLineman said:
    How do you think this compares to randomly generated patterns and might they be my next stop?

    It's all about the algorithm: the formula behind what happens with these MIDI notes.
    A completely random algorithm that does not apply any music theory will give random results that will likely need some manual work, as @Max23 said, in order to sound enjoyable.

    However, a more "intelligent" algorithm has more or less musical knowledge built in, and it will be much better at providing "musically pleasing" results.
    Such algorithms can only be as good as their creator, so the random output of different tools varies a lot.
    A good example of someone who has put a lot of effort and knowledge into such an algorithm is Stephen Kay of Karma Labs. He has written a few patents that are really worth reading. He basically tried to put a lot of experience and music theory into his code, and if you have ever used the KARMA function in a Korg or Yamaha keyboard/synth, you'll know what I mean.
    Where this stuff gets even more interesting is when an algorithm additionally has knowledge built in that has been captured from thousands of existing songs (a neural network can be the basis), so the manual picking out of the best phrases can be automated too.
    Depending on which songs the developer has used to train the algorithm, it would be able to pick out snippets that match a certain rhythmic and melodic style, for example.

    See
    http://neuroph.sourceforge.net/tutorials/MusicClassification/music_classification_by_genre_using_neural_networks.html
    for an example of an algorithm that could be modified to recognize and categorize potentially useful snippets of a random mess into well-known categories automatically.

    Although I find the technology exciting, I'm not a fan of using such algorithms for composition. After all, they either represent "mainstream taste" or the fingerprint of another composer, and using them, to me, feels like letting others do the creative work. It's fun to play around with them, however, and they could even inspire me to write something completely different.
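    A toy Python sketch of that contrast (all names here are made up for illustration, not taken from any real app): the first function draws pitches completely at random, the second constrains them to a scale, which is the simplest possible piece of "built-in music theory".

    ```python
    import random

    C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the C major scale

    def random_notes(n, low=48, high=84, seed=0):
        """Purely random MIDI pitches: no music theory applied."""
        rng = random.Random(seed)
        return [rng.randint(low, high) for _ in range(n)]

    def scale_notes(n, root=60, scale=C_MAJOR, octaves=2, seed=0):
        """Pitches constrained to a scale: a minimal 'theory-aware' generator."""
        rng = random.Random(seed)
        pool = [root + 12 * octave + step
                for octave in range(octaves) for step in scale]
        return [rng.choice(pool) for _ in range(n)]

    print(random_notes(8))  # raw chaos, usually needs manual cleanup
    print(scale_notes(8))   # already "musically pleasing" by construction
    ```

    Everything beyond this (rhythm, phrasing, harmony, style) is just more layers of the same idea: shrinking the random pool using musical knowledge.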

  • edited April 2019
    The user and all related content has been deleted.
  • @Max23 said:

    @rs2000 said:

    […]

    I've heard some pretty impressive results from AI,
    but I thought they fed it too much Mozart lol.
    I doubt that anything really new will come from this, as all the AI I've heard has only been fed Western music theory ...

    That's the point!
    But you could roll your own and feed it with whatever music you like :smiley:

  • edited April 2019
    The user and all related content has been deleted.
  • @Max23 said:

    @rs2000 said:

    […]

    That's the point!
    But you could roll your own and feed it with whatever music you like :smiley:

    I wonder what happens if you feed it with the experimental side of Wendy Carlos when she is really sailing away with the pitch ...

    I bet @LinearLineman would like it! 😉

  • edited April 2019
    The user and all related content has been deleted.
  • I program in Python for writing MIDI and am thinking about this topic a lot. The main thing about traditional software is that it pushes the user to write to a grid based on common-practice sheet music and theory, and then, if we are lucky, allows us to "humanize" the results.

    I much prefer a system which would respect the human pulse first, and build a grid from that.

    The variable tempo recognition in Music Memos, which has migrated into Logic, has been such an inspirational approach to capturing that, so I am interested in using it as the framework for every song I make.

    But I also think computer music can go even deeper into that human pulse.

    I definitely like the idea of fishing for snippets, but if those snippets are based on mathematical grids, I have to wonder if it is possible to truly achieve an organic final result.

    I'm basically in the process of spec'ing out what this would look like for my own rhythm and arp generator.

    But of course that requires math to capture events; if those events are "humanized" from the jump, though, maybe they will ring even truer.
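    A rough sketch of the idea in Python (all names are my own, just a thought experiment): instead of snapping events to a fixed-tempo grid, build the grid from human tap times, so the slots stretch and shrink with the player's pulse.

    ```python
    import bisect

    def quantize_to_taps(event_time, taps, subdivisions=4):
        """Snap an event time onto a grid built from human tap times.

        Each pair of consecutive taps is one beat; the beat is split into
        `subdivisions` slots whose absolute times stretch with the tapped tempo.
        """
        # Build the elastic grid from the taps
        grid = []
        for a, b in zip(taps, taps[1:]):
            step = (b - a) / subdivisions
            grid.extend(a + i * step for i in range(subdivisions))
        grid.append(taps[-1])
        # Snap to the nearest grid point
        i = bisect.bisect_left(grid, event_time)
        candidates = grid[max(0, i - 1):i + 1]
        return min(candidates, key=lambda t: abs(t - event_time))

    taps = [0.0, 0.52, 1.01, 1.55]  # a slightly uneven human pulse, in seconds
    print(quantize_to_taps(0.30, taps))
    ```

    The grid still "requires math", but the math is anchored to the recorded pulse rather than to a metronome.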

  • @rs2000, thanks for the explanation I was looking for. I am not averse to using the "premanufactured home" concept. As I am off the musical grid by a sixteenth note it seems I use these snippets in and out of context. I have corrupted every genre I attempt, so absolute corruption is absolutely fascinating.

    @lukesleepwalker, count me in. Guilty as charged! @applehorizon, what you said reached me. The human pulse seems to be the start of it all. The fetal heartbeat... is it the same as the mom's heartbeat, or does it foreshadow counterpoint and swing?

  • If you want to geek out on an interview with two of the modular scene's best minds, this interview with Music Thing's Tom Whitwell is a corker. Tom's most famous product is the Turing Machine, which is the epitome of Brian Eno's "Gardening not architecture" quote. There's a free Max for Live version of the Turing Machine (luckily Tom, much like Mutable Instruments, makes everything he does open source so others can retrofit and enhance to their heart's content), and I'm hoping some bright spark makes a clone for iOS. It would be a perfect tool for our emergent "Digital Modularity" iOS audio scene.

    Anyway, enough chat; here's a link to the video. (It starts around the 5-minute mark, as the opening part is intro guff.)
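    For anyone curious how small the core idea is, here's a rough Python sketch of the Turing Machine's looping shift register (heavily simplified; the hardware reads the bits out through a DAC as a pitch voltage). With the flip probability at zero the loop locks and repeats; turning it up "gardens" the sequence instead of composing it.

    ```python
    import random

    class TuringMachineSketch:
        """Toy model of a Turing Machine style locked-random sequencer:
        a looping shift register whose recycled bit may flip each step."""

        def __init__(self, length=16, flip_prob=0.1, seed=None):
            self.rng = random.Random(seed)
            self.bits = [self.rng.randint(0, 1) for _ in range(length)]
            self.flip_prob = flip_prob

        def step(self):
            out = self.bits.pop()              # bit leaving the register
            if self.rng.random() < self.flip_prob:
                out ^= 1                       # occasionally mutate the loop
            self.bits.insert(0, out)           # recycle it to the front
            # Read the first 8 bits as an 8-bit value, 0-255
            return int("".join(map(str, self.bits[:8])), 2)

    tm = TuringMachineSketch(flip_prob=0.0, seed=42)  # probability 0 = locked
    seq = [tm.step() for _ in range(32)]
    print(seq[:16] == seq[16:])  # locked loop repeats every 16 steps: True
    ```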
