Audiobus: Use your music apps together.

What is Audiobus? Audiobus is an award-winning music app for iPhone and iPad which lets you use your other music apps together. Chain effects on your favourite synth, run the output of apps or Audio Units into an app like GarageBand or Loopy, or select a different audio interface output for each app. Route MIDI between apps — drive a synth from a MIDI sequencer, or add an arpeggiator to your MIDI keyboard — or sync with your external MIDI gear. And control your entire setup from a MIDI controller.


Pixel Music - make music from images

I want to share the first demo for an iOS music app that I am currently working on:

The gist of it is that a picture is converted into a sequencer grid of a selectable size, and the sequencer notes are calculated from the underlying pixel data. For the video above, I used the following images as a source, although in the final version you will be able to use a photo from the phone's camera or library:
https://images.pexels.com/photos/132037/pexels-photo-132037.jpeg
https://dingo.care2.com/pictures/greenliving/1407/1406075.large.jpg
https://orig00.deviantart.net/1551/f/2018/030/6/2/untitled_by_juhaniviitanen-dc1mu7e.jpg
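The image-to-grid step described above can be sketched roughly like this (an illustrative Python sketch only; the actual app is built in Swift on AudioKit, per the thread below): downsample the image into a coarse grid by averaging the RGB values of the pixels each cell covers.

```python
# Illustrative sketch, not the app's code: reduce an image to a
# coarse sequencer grid by averaging each cell's pixel colours.

def image_to_grid(pixels, rows, cols):
    """pixels: 2D list of (r, g, b) tuples; returns a rows x cols
    grid of averaged (r, g, b) cells."""
    h, w = len(pixels), len(pixels[0])
    grid = []
    for gy in range(rows):
        row = []
        for gx in range(cols):
            # Pixel block covered by this grid cell.
            y0, y1 = gy * h // rows, (gy + 1) * h // rows
            x0, x1 = gx * w // cols, (gx + 1) * w // cols
            block = [pixels[y][x] for y in range(y0, y1)
                                  for x in range(x0, x1)]
            n = len(block)
            row.append(tuple(sum(c[i] for c in block) // n
                             for i in range(3)))
        grid.append(row)
    return grid

# A 4x4 "image" (red left half, blue right half) reduced to 2x2.
img = [[(255, 0, 0)] * 2 + [(0, 0, 255)] * 2 for _ in range(4)]
print(image_to_grid(img, 2, 2))
# -> [[(255, 0, 0), (0, 0, 255)], [(255, 0, 0), (0, 0, 255)]]
```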

In the demo, I have three simulators running side by side, but for later versions, I plan on adding support for running multiple sequencers (with different settings and images) from a single app. The sequencer is Ableton Link compatible and this is how I managed to get the three apps and Ableton Live to stay in sync. There's also a simple synth built in with the standard options to get things started, but where I think this app will really shine is sending MIDI to more robust synth apps (which btw works but is not in the demo).

Right now the note pitches are calculated as an average of the RGB values of the pixels and the velocity is static, but there are many other possible combinations here, and I intend to expose this such that people can come up with their own interesting algorithms. The UI is pretty rough at this point and I'll work on improving it after the basic functionality is finished. While the demo is focused on the iPhone interface, the app will be universal and work on the iPad as well.
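In code, that default pitch mapping might look like the following (a hypothetical sketch of the behaviour described, using the 0-255 to 0-127 linear interpolation mentioned later in the thread):

```python
def rgb_to_pitch(r, g, b):
    """Average the channels, then map 0-255 linearly onto MIDI 0-127."""
    avg = (r + g + b) / 3
    return round(avg * 127 / 255)

print(rgb_to_pitch(255, 255, 255))  # -> 127
print(rgb_to_pitch(0, 0, 0))        # -> 0
print(rgb_to_pitch(128, 64, 192))   # -> 64
```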

A lot has been done, but there's still a ton left to do. If you're interested in getting updates on the progress of Pixel Music, please subscribe using the following form: http://eepurl.com/dIHsxP

Thanks,
Andrei

P.S. - of course, it will also feature Audiobus support :)


Comments

  • Nice, I’ve confirmed my humanity (always a good thing to do on a Thursday) and subscribed.

    Interested to see where this one goes ... maybe “animating” the image would be fun (applying blurring, warping, shifting ... thinking photoshop kind of effects).

    First of many random requests you’re likely to get around here. ;)

  • Hey @TheVimFuego, thanks a bunch for subscribing and for the feedback. I'm curious about the effects you're talking about and what you're hoping to get out of them - my first hunch is a slightly different image and, as a result, a slightly different sequence - right?
    Although some techniques, like blurring, might not have much effect, because I believe the resulting pixelated image will be more or less the same. Anyway, I'll try this out and share the results :smile:

  • @andreitonescu said:
    Hey @TheVimFuego, thanks a bunch for subscribing and for the feedback. I'm curious about the effects you're talking about and what you're hoping to get out of them - my first hunch is a slightly different image and, as a result, a slightly different sequence - right?
    Although some techniques, like blurring, might not have much effect, because I believe the resulting pixelated image will be more or less the same. Anyway, I'll try this out and share the results :smile:

    Yes, I’m thinking something like a generative version where the transform is applied every x beats. Maybe a degree of randomness in there so you start at one place and end up somewhere musically related but different.

    A bit of probability might work, so there is x% chance to play a step.

    And, yes, some effects may be so subtle they are not noticeable. Some, like inverting the colours (effectively playing the “negative” image), might be a bit more dramatic.

    I’d keep it simple to start off with. I don’t know how you’re going to price the app, but some more sophisticated stuff could be bundled as IAPs if you get enough interest.
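That per-step probability idea could be sketched like this (illustrative only, not anything the app ships):

```python
import random

def gate_steps(steps, probability, rng=random.random):
    """Keep each step with the given probability; dropped steps
    become None (a rest). rng is injectable for testing."""
    return [s if rng() < probability else None for s in steps]

# Roughly 3 out of 4 steps of this C major phrase will survive.
print(gate_steps([60, 62, 64, 65, 67, 69], 0.75))
```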

  • Awesome app! Subscribed to the list too

    Maybe an FX could be that each even line shifts x pixels left and the odd lines x pixels right and whatnot

    @andreitonescu Do the notes stick to a scale? Or does it output MIDI (so we could filter the scale with something like Midiflow)?

  • edited October 2018

    @andreitonescu Cool! Also awesome demo movie! Just curious how long this took to make?

  • Is this similar to what NightRadio did a few years back with PixiVisor?

  • Brilliant, it looks awesome.

  • I’m very interested in this; particularly in use with bold geometric and / or typographic design imagery. Would it be possible to demo how the app reacts to the below examples or similar imagery?



  • Subscribed. I am very intrigued by this, and look forward to its coming to market.
    Good luck!

  • Thanks everybody for the great feedback.
    @TheVimFuego I'm definitely struggling with keeping this simple while at the same time having useful options. I'll likely experiment with some generative features for the sequencer to see how they go - the downside is that, IMHO, that will add some complexity, since I would probably like the fitness functions / generative rules to be exposed somehow such that the user can access them. Btw you should check out http://yuriturov.com/xynthesizr/ - I believe this sequencer has some generative/evolving options.

    In relation to what @senhorlampada was saying, a degree of customisation that I'll likely experiment with is having more options for how the sequencer traverses the pixel grid. Right now it's the standard top-down, left to right - but there are many other ways to do this and they'll likely output interesting results.
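A few such traversal orders, sketched in Python (illustrative only; the actual traversal options in the app are undecided at this point):

```python
def traversals(rows, cols):
    """Three alternative orders for walking a sequencer grid,
    as lists of (row, col) coordinates."""
    left_to_right = [(r, c) for r in range(rows) for c in range(cols)]
    top_to_bottom = [(r, c) for c in range(cols) for r in range(rows)]
    # Boustrophedon ("snake"): reverse direction on every other row.
    snake = [(r, c if r % 2 == 0 else cols - 1 - c)
             for r in range(rows) for c in range(cols)]
    return left_to_right, top_to_bottom, snake

ltr, ttb, snake = traversals(2, 3)
print(snake)  # -> [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```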

    I've already built in a scale + mode quantization option and the app also outputs MIDI. I found it very enjoyable when routed out to something like Animoog - maybe I'll post an example video on this as well. There's also an octave range selector where you can restrict the notes generated to be in certain octave ranges.
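A minimal sketch of scale quantization of that kind (illustrative Python; the app's actual implementation is in Swift and may differ, e.g. in how it rounds):

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets from the root

def quantize(note, root=0, scale=MAJOR):
    """Snap a MIDI note down to the nearest scale tone at or below it."""
    pc = (note - root) % 12                      # pitch class vs. root
    snapped = max(s for s in scale if s <= pc)   # nearest tone below
    return note - (pc - snapped)

print(quantize(61))  # C#4 -> 60 (C4)
print(quantize(66))  # F#4 -> 65 (F4)
```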

    @greengrocer I think it's been about 4-5 months now, but this is not full time as I do it in my spare time after work and on weekends. I'm hoping to have it out this year, but we'll see how that goes. Looking back, the thing that took the most time was integrating with Ableton Link, as I had to write the sequencer from scratch since the one from AudioKit (the framework that I'm using at its base) was not sufficiently robust for my needs. After I finish the app, I'm planning to release the sequencer logic on GitHub so that anyone interested can take a look/use it.

    @Jumpercollins very nice project, I was not familiar with it. It is vaguely similar if you consider only the first part of PixiVisor's chain, but the audio output from the video is not musical in any way and serves the sole purpose of encoding and transmitting the information over to the receiver. In Pixel Music, I convert an image to a pixelated version of it and then generate notes from those pixels based on certain algorithms.

    @brice interesting selection of images. In my tests I always preferred to use nature / landscape images, but I'll give it a try with something more geometric as well. Right now, the algorithm I use for calculating the notes simply averages the red, green and blue values of the generated pixel grid and calculates the pitch based on that (simple linear interpolation from 0-255 to 0-127).
    Where I think stuff could get more interesting with these kinds of images is the ability to experiment with other types of algorithms, like linking the red value to the pitch, the blue value to the velocity, and so on. This part of choosing/editing the algorithm is not yet built, but I'll definitely include it in one form or another when I release this. Anyway, I'll run these images through what I have now and see what I get :smile:
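For example, a per-channel mapping along those lines might look like this (a hypothetical algorithm, not something the app currently ships):

```python
def channel_map(r, g, b):
    """Hypothetical alternative mapping: red controls pitch,
    blue controls velocity, green is unused."""
    pitch = round(r * 127 / 255)
    velocity = max(1, round(b * 127 / 255))  # keep at least velocity 1
    return pitch, velocity

print(channel_map(128, 0, 255))  # -> (64, 127)
```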

    Thanks everybody for the interest!

  • @andreitonescu I'm super interested in this too. Just signed up for your mailing list and will follow the vimeo channel too. :)

  • Might be of some use
    Processing Pixels to hear what you see

    http://www.3blue1brown.com/videos/2017/5/26/the-hilbert-curve

    It might be beyond the scope of the way you're handling the conversion, but food for thought.
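For reference, the classic bit-manipulation algorithm for mapping a 1-D step index to 2-D coordinates along a Hilbert curve looks like this (a standard textbook routine, unrelated to the app's current traversal); the appeal for a sequencer is that consecutive steps always land on adjacent grid cells:

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve of the given order to
    (x, y) on a 2**order x 2**order grid."""
    n = 1 << order
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            # Rotate/flip the quadrant before moving up a level.
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Walk a 4x4 grid in Hilbert order.
path = [hilbert_d2xy(2, d) for d in range(16)]
print(path[:4])  # -> [(0, 0), (1, 0), (1, 1), (0, 1)]
```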

  • Similar apps

    Spinphony by Mark Nilsen
    https://itunes.apple.com/us/app/spinphony/id531017215?mt=8

    Melodist - Let photos sing by Sonar Multimedia
    https://itunes.apple.com/us/app/melodist-let-photos-sing/id1115047756?mt=8

    And 2 or 3 of Alexander Zolotov's apps
    Including PixiVisor
    https://itunes.apple.com/us/app/pixivisor/id601279460?mt=8

  • @andreitonescu Yes, I'm a big fan of Xynthesizr, it's a gem.

    Follow your instinct regarding complexity ... so many ideas to filter ... on that note, animated GIFs could be interesting ...

  • Hey Everybody,

    It’s been a while, longer than I wanted, but I have an update on Pixel Music. You can check out a new version of the app here: https://www.youtube.com/watch?v=LEHZdZ_S6vI, recorded on a real iPhone.

    The first part of the demo features the built-in synth, while the end sequence has the MIDI notes sent to AudioKit SynthOne and Animoog - the screen recorder did not work with the two synths running, so that’s why there’s no video :)

    Since the first version presented, I’ve added the possibility of having multiple sequencers running at the same time. Right now there’s no limit; I’ll have to see from usage when things start to go awry in terms of performance and so on.

    There are a ton of other options to explore, but I think it is in a state where I can share it with people as an alpha/beta version via TestFlight so that I can get live feedback. Besides the app itself, I think in this environment it’s essential to see how it integrates into everybody’s workflow (Audiobus, Ableton Link, etc.).

    I’ll figure out a way to do that and share it here and on the mailing list (http://eepurl.com/dIHsxP).


    Thanks,

    Andrei

    P.S. the UI also has an adapted landscape version for the iPad :) 


  • And here's another jam with less demo, more music, straight to some hardware synths from the iPad version and some random photos from my gallery :)

    Cheers,
    Andrei

  • Five sequencers, all with MIDI out and channels to boot. Lots of randomizer options, all based on photo pixels. Crazy!

  • Nice jam. Looking forward to the app.

  • @andreitonescu it’s a very cool app. I’m having some issues and crashes though. Have been reporting them via TestFlight messaging. Are you getting them?

  • Deff would be rad with midi out! Looking forward to trying this

  • @reasOne said:
    Deff would be rad with midi out! Looking forward to trying this

    It’s a public beta I believe so anyone can try it. Midi out is already built in.

  • @Jumpercollins said:

    @reasOne said:
    Deff would be rad with midi out! Looking forward to trying this

    It’s a public beta I believe so anyone can try it. Midi out is already built in.

    About the Midi Out...
    It took me a minute to find it in landscape mode. It is more easily discovered in portrait mode.
    But now that I found the Midi Out section, how do you actually activate it? I can’t seem to enable it.
    Also, trying to load it in an AB3 midi input slot just results in endless spinning, but never finishing the app loading routine.

  • @CracklePot you should see available apps that have a midi in port open in the Midi Out section for each sequencer, and you can tap to select/deselect them for sending midi. This section definitely needs some improvements on the UI / discoverability.

    Also, the app isn't yet Audiobus/IAA compatible, but for some reason it appears to other apps as though it is. I had previously started working on this part, but deprioritised it since I had to take on bigger issues. I'll definitely focus on solving this for the next update, as it's tightly integrated into everybody's workflow.

    Cheers,
    Andrei

  • @andreitonescu
    Ah yes. It is working great now. Had 3 sequences sending to AUM just now.

    For some reason, when I tried last night, the box that lists available Midi ports was showing up empty white. Not even the usual Network Session 1 was showing up. I was trying to tap that big white rectangle like it was a big button. :D

    But it is working great now.

    Thank you.
    :)

  • @andreitonescu
    Totally forgot to mention how fun this app is to play with. It is very clever in its control options, and having each sequencer with its own unique settings really opens up all sorts of possibilities. You have done a really excellent job designing this app.

    Also, no crashes or glitches so far. Quite solid in its performance.

  • two image sequences driving Animoog. Worked well, but crashes consistently if you’re changing parameters while playing

  • @skiphunt said:
    two image sequences driving Animoog. Worked well, but crashes consistently if you’re changing parameters while playing

    Maybe try with some AU synths. I am driving Phosphor and Mersenne, at Max BPM, and one sequence on 1/32 notes. I can change everything up, while playing, with no crashes.

    I am using the smallest grid sizes, primarily.

    When I first read your post, I read ‘image sequences’, like movie or animation files. I was trippin’ for a second, wondering how you managed to pull that off.
    :D

  • @CracklePot said:

    @skiphunt said:
    two image sequences driving Animoog. Worked well, but crashes consistently if you’re changing parameters while playing

    Maybe try with some AU synths. I am driving Phosphor and Mersenne, at Max BPM, and one sequence on 1/32 notes. I can change everything up, while playing, with no crashes.

    I am using the smallest grid sizes, primarily.

    When I first read your post, I read ‘image sequences’, like movie or animation files. I was trippin’ for a second, wondering how you managed to pull that off.
    :D

    Oh, I've had success for sure. I was basically just posting some details from one that caused consistent crashing... as more of a bug report. I sent the same note to him via testflight too with a little more detail.

    Until now, I've been using different images for each sequence. This session that crashed was using the same image for both sequences. Doubt that matters though.

    It's not often "musical" but it's a great starting off point to get something going that works via all the algorithm settings, then build on top of that.

    Would be cool if you could designate one sequence to generate chords off one image, and notes from another image. Though, I suppose you could do that by sending the midi to a host with one of the newer midi apps that generates chords, etc. from notes.

  • @skiphunt
    Ok, cool.
    I just didn’t want you to miss out on the fun I have been having with this cool app.

    I think that, depending on the image you are working with, RGB avg vs. Hue can give pretty different results. If your image is high contrast but has less colour variety, then Hue gives good results. If your image has a lot of colour variety, then RGB avg gives more musical results.

    This is based on the idea that more or less flowing note streams sound more musical than jumping around to widely spaced notes.

    Also, if you set the scale root to ‘none’, the scale type is also ignored. You end up with chromaticism primarily.

    I am going to try to doctor up some images, by copying/duplicating areas around, to see if I can generate closely related or shorter, repeating sections. I also want to try throwing various gradients over some images to see if I can get a sort of modulation effect happening.
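A quick sketch of why hue and RGB average diverge on high-contrast, low-colour-variety images (illustrative Python, not the app's code; `colorsys` is Python's standard RGB/HSV conversion module):

```python
import colorsys

def pitch_from_rgb_avg(r, g, b):
    """Average the channels, map 0-255 onto MIDI 0-127."""
    return round((r + g + b) / 3 * 127 / 255)

def pitch_from_hue(r, g, b):
    """Map the hue (0.0-1.0) onto MIDI 0-127."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(h * 127)

# A bright red and a dark red: the averages differ a lot,
# while the hues are identical.
print(pitch_from_rgb_avg(200, 40, 40), pitch_from_rgb_avg(80, 16, 16))
print(pitch_from_hue(200, 40, 40), pitch_from_hue(80, 16, 16))
```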

  • @CracklePot said:
    @skiphunt
    Ok, cool.
    I just didn’t want you to miss out on the fun I have been having with this cool app.

    I think considering the image you are working with, RGB avg vs. Hue can give pretty musical results. If your image is high contrast, but less color variety, then Hue gives good results. If your image has a lot of color variety, then RGB avg gives more musical results.

    This is based on the idea that more or less flowing note streams sound more musical than jumping around to widely spaced notes.

    Also, if you set the scale root to ‘none’, the scale type is also ignored. You end up with chromaticism primarily.

    I am going to try to doctor up some images, by copying/duplicating areas around, to see if I can generate closely related or shorter, repeating sections. I also want to try throwing various gradients over some images to see if I can get a sort of modulation effect happening.

    Mostly experimenting with fairly saturated and richly colorful images. I've been wandering around the Yucatan of Mexico for a little more than a month and have lots of fresh colorful images with loads of varied texture that I've just edited on my iPad to play with. :)

    https://www.instagram.com/skiphunt/
