Machine Learning / Generative Music?

Hi all, any insight into machine learning and music? I feel like technology has gotten that far, I'm just not sure where it is haha. I would love to take 500 of my songs, stem them out, have the computer learn them and then generate thousands of loops.

Comments

  • For the machine to learn, you'll need to define some criteria and rate your 500 songs in a useful way.
    Without that, it's not really machine learning but rather randomization based on your input material.

  • @rs2000 good point. Still no idea where to start haha. I'm going to do some digging. I found something for M4L in Ableton, but it just generates MIDI based off the MIDI you feed it. Not quite the same thing.
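
    For anyone curious, that "generate MIDI based off the MIDI you feed it" idea is essentially a Markov chain over notes. A toy sketch of just the concept in plain Python (made-up phrases, not the actual M4L device):

        # Toy "learn from the MIDI you feed it" generator: a first-order Markov
        # chain over note pitches. Illustrative only.
        import random
        from collections import defaultdict

        def learn_transitions(note_sequences):
            """Count which pitch tends to follow which across the input phrases."""
            table = defaultdict(list)
            for seq in note_sequences:
                for a, b in zip(seq, seq[1:]):
                    table[a].append(b)
            return table

        def generate(table, start, length=16):
            """Walk the transition table to produce a new phrase."""
            phrase = [start]
            for _ in range(length - 1):
                choices = table.get(phrase[-1])
                if not choices:
                    break
                phrase.append(random.choice(choices))
            return phrase

        # Two short input phrases (MIDI note numbers), then a generated one.
        phrases = [[60, 62, 64, 65, 67], [60, 64, 67, 72, 67, 64]]
        print(generate(learn_transitions(phrases), start=60))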

  • @rs2000 said:
    For the machine to learn, you'll need to define some criteria and rate your 500 songs in a useful way.
    Without that, it's not really machine learning but rather randomization based on your input material.

    Not necessarily. You could train a machine learning generative model based on the raw audio, for instance. @shinyisshiny if you feel like digging in, here is one post to get you started: https://wandb.ai/authors/openai-jukebox/reports/Experiments-With-OpenAI-Jukebox--VmlldzoxMzQwODg
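
    To make the "train on raw audio" idea a bit more concrete, here's a deliberately tiny sketch of the general shape: quantize the audio to discrete levels and train a model to predict the next sample from the previous ones. This is nothing like Jukebox's actual architecture or scale, just an illustration (random data stands in for the songs):

        # Toy autoregressive "next sample" model over quantized audio.
        # Vastly simplified stand-in for what Jukebox-like models do at scale.
        import torch
        import torch.nn as nn

        CONTEXT = 256   # how many past samples the model sees
        LEVELS = 256    # audio quantized to 256 discrete levels

        class NextSample(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Embedding(LEVELS, 32),
                    nn.Flatten(),
                    nn.Linear(CONTEXT * 32, 512),
                    nn.ReLU(),
                    nn.Linear(512, LEVELS),   # logits over the next sample's level
                )

            def forward(self, x):             # x: (batch, CONTEXT) integer samples
                return self.net(x)

        model = NextSample()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # One training step on random data standing in for real quantized audio.
        x = torch.randint(0, LEVELS, (8, CONTEXT))   # 8 context windows
        y = torch.randint(0, LEVELS, (8,))           # the "next" sample for each
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()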

  • @bleep said:

    @rs2000 said:
    For the machine to learn, you'll need to define some criteria and rate your 500 songs in a useful way.
    Without that, it's not really machine learning but rather randomization based on your input material.

    Not necessarily. You could train a machine learning generative model based on the raw audio, for instance. @shinyisshiny if you feel like digging in, here is one post to get you started: https://wandb.ai/authors/openai-jukebox/reports/Experiments-With-OpenAI-Jukebox--VmlldzoxMzQwODg

    That's an interesting demo, thanks for the link!
    Got any examples that sound more "musical" than the ones shown?

  • No experience, have just seen some blog posts. Looks like Jukebox from OpenAI has some examples etc. Scroll down here for examples and a good explanation. Note to readers: these samples are generated from scratch by a model, there is no re-sampling going on here.
    https://openai.com/blog/jukebox/

    Takes a long time to train these things oneself, though.

  • @bleep said:
    No experience, have just seen some blog posts. Looks like Jukebox from OpenAI has some examples etc. Scroll down here for examples and a good explanation. Note to readers: these samples are generated from scratch by a model, there is no re-sampling going on here.
    https://openai.com/blog/jukebox/

    Takes a long time to train these things oneself, though.

    This is fun!
    I wonder though if creating MIDI files based on different artists' compositions wouldn't be a better idea...

  • This brings up an interesting question about how Apple Music uses ML. I listen to my huge music library on random most of the time and notice subtle similarities between consecutive songs, not always but often. For example, a percussive classical piece might be followed by a percussive jazz piece. No other similarity, just the sense of propulsion. Then the next song seems to forgo the ML entirely: an a cappella trio, say. Happens enough that I notice. Anyone else have that experience?
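
    No idea what Apple actually does internally, but the effect described could come from something as simple as ranking tracks by similarity in some feature space (tempo, percussiveness, and so on) and picking the next song from the near neighbours. A purely speculative toy sketch with made-up feature numbers:

        # Speculative "subtly similar" shuffle: choose the next track from the
        # closest neighbours of the current one in a made-up feature space.
        import random
        import numpy as np

        library = {
            "percussive classical": np.array([0.9, 0.2, 0.7]),
            "percussive jazz":      np.array([0.8, 0.3, 0.6]),
            "a cappella trio":      np.array([0.1, 0.9, 0.2]),
        }

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def next_track(current, k=2):
            """Rank the other tracks by similarity and pick randomly among the top k."""
            others = [(t, cosine(library[current], v))
                      for t, v in library.items() if t != current]
            others.sort(key=lambda pair: pair[1], reverse=True)
            return random.choice(others[:k])[0]

        print(next_track("percussive classical"))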

  • @rs2000 said:

    @bleep said:
    No experience, have just seen some blog posts. Looks like Jukebox from OpenAI has some examples etc. Scroll down here for examples and a good explanation. Note to readers: these samples are generated from scratch by a model, there is no re-sampling going on here.
    https://openai.com/blog/jukebox/

    Takes a long time to train these things oneself, though.

    This is fun!
    I wonder though if creating MIDI files based on different artists' compositions wouldn't be a better idea...

    Absolutely! See the link to their earlier work ;)
    https://openai.com/blog/musenet/

    They did audio to see how the more challenging domain would behave.

    With MuseNet you can prime the model with six notes of Chopin and ask it to continue in a pop style from there. Quite impressive stuff.
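
    Just to illustrate what "priming" means mechanically: the model gets a style token plus the seed notes as context, then keeps predicting the next note on its own. MuseNet itself isn't runnable like this, so the "model" below is a random stand-in, purely to show the shape of the loop:

        # Toy illustration of priming a sequence model: style token + seed notes
        # go in as context, the model keeps appending notes. fake_model is a
        # stand-in that returns arbitrary scores; it is not MuseNet.
        import random

        STYLE_POP = 1000                           # made-up token id for "pop style"
        chopin_prime = [64, 66, 68, 69, 71, 73]    # six illustrative seed pitches

        def fake_model(tokens):
            """Stand-in for a trained model: scores for every candidate next pitch."""
            random.seed(sum(tokens))               # deterministic for a given context
            return {pitch: random.random() for pitch in range(48, 84)}

        def continue_sequence(prime, style, length=16):
            tokens = [style] + list(prime)
            for _ in range(length):
                scores = fake_model(tokens)
                tokens.append(max(scores, key=scores.get))   # greedy next-note pick
            return tokens[1 + len(prime):]         # only the generated continuation

        print(continue_sequence(chopin_prime, STYLE_POP))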
