Anyone interested in a Melodyne-style iOS app?

Comments

  • I'd pay a lot for Melodyne on iOS. Apple Pencil would be great for edits.

    Auria might be interested in this. They need something new to entice their pro users now that FabFilter is releasing AUv3's. I would love for ZenBeats to provide this.

    I strongly prefer to use it as a plug-in (e.g., in Reaper on desktop) but I can't imagine that AUv3 will support the kind of interconnect that is needed. Even on desktop, Melodyne prefers ARA2, which was designed for this purpose by Celemony and PreSonus.

  • edited June 2020

    AFAIK the Celemony algorithms are patented, so at least you can spare yourself the effort of reverse engineering the stuff...
    Best of luck finding a functionally identical alternative ;) o:)

  • @danielfromcodalabs said:

    • Are you interested? Why or why not?
    • How much would you pay for an app like this?
    • What features are “must-haves”? Meaning if the app doesn’t have this feature, you wouldn’t download it.
    • What would be an ultimate “killer” feature? Even if it defies the laws of physics. Think outside of the box :)

    Yes.
    I own the big Melodyne package, so wrong person to ask.
    Audio to MIDI export.
    Your own take on ARA for iOS.

  • @JRSIV said:

    My vote would also be towards a post-production interface rather than a live tuning facility: once a vocal track is done, it can be run through the tuner (or imported into it), and a graphical piano-roll representation of the notes of the vocal can then be edited along the piano roll.

    This, yes please. £30. Or £50.

  • @mojozart said:
    I'd pay a lot for Melodyne on iOS. Apple Pencil would be great for edits.

    Auria might be interested in this. They need something new to entice their pro users now that FabFilter is releasing AUv3's. I would love for ZenBeats to provide this.

    I strongly prefer to use it as a plug-in (e.g., in Reaper on desktop) but I can't imagine that AUv3 will support the kind of interconnect that is needed. Even on desktop, Melodyne prefers ARA2, which was designed for this purpose by Celemony and PreSonus.

    The guys at WaveMachine Labs have always been cool and helpful but getting them to adopt user requests takes a long time. I totally agree with @mojozart, they should try to reach out to Celemony, even if just for shits and giggles.

    Along with describing the Auria Pro platform and the prior relationship with FabFilter, they could absolutely pitch some kind of IAP or integration of the basic Melodyne tuning program. I agree Auria kind of needs a new "hook" now that every DAW can use FabFilter plugs (their former exclusive) via AUv3.

    I know we hear rumors of an iPadOS version of Logic and that the next round of iPad Pros will be ultra-powerful beasts, but not everyone would jump aboard if it were available now. I like Logic more than GarageBand iOS; the weird routing and the lack of a mixer and traditional DAW design are not issues with Logic. However, Logic is in use all over the world and people paid roughly $200 for it, so Apple won't want to give that up so easily. Even if we get it, there may be a price point many don't want to cross.

    Cubasis, Auria, BM3 and NS2 will still have user bases, ESPECIALLY if the companies do some cool additions to keep users there. A monophonic professional vocal tuning plug-in would be a big-time addition to any of the current DAWs.

  • edited June 2020

    • Yes.

    • All the features I would like to see have already been mentioned.

    • Price: I would pay FF iOS prices for this app without even thinking about it.

  • Yes - so long as it can use AudioKit Tuneup and/or Scala or .tun tuning files.

  • edited June 2020

    Weird thread, you will get too many chefs wanting to add everything

  • I think as you do your personal research you'll discover a lot of tasks you can accomplish using batch-oriented analysis:

    1. Isolate "noise" and remove it.
    2. Determine the loudness contours of a track and produce a version that targets a precise volume maximum, like the -14 LUFS standard Spotify applies to all uploaded tracks to deliver uniform loudness (see the sketch just after this list).
    3. Detect occurrences of specific pitches that are slightly out of tune relative to some scale standard and modify the offending pitches.
    4. Time-stretch a track from one real-time measured standard to another with none of the pitch or time relationships getting changed... some people would like to take ten seconds and stretch it to last a minute, for example. Large amounts of stretching or shrinking could be useful.
    5. Batch-alter a file to deliver a more balanced and distinct "mix", which assumes you can detect instruments and scope their EQs to occupy a defined slice of the spectrum, so more instruments are uniquely presented in the final mix and each is improved as a by-product of less spectral overlap. Mastering engineers seek to do this for each track processed, but automating it for optimal benefit could be an important app. It's a tricky analysis problem, though. Maybe you'd want to work from stems of each instrument and create a "mix", but if you can operate on a single wave file you have a unique value prop.
    6. Remove vocals, instruments or drums and isolate one or more instruments in the deliverable wave file.
    7. De-FX a track, removing reverb, delays, phasing and tremolo, and produce a result that sounds very close to a simply mic'ed audio file in a reflection-free room.
    8. Change the spectral image of a singer, producing gender-change effects.
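
    For item 2, here is a minimal offline sketch of what the loudness-normalization step could look like in Python, assuming the pyloudnorm and soundfile packages are available; the file names are placeholders, not part of any existing app:

        # Measure integrated loudness and normalize a file to -14 LUFS
        # (pyloudnorm implements an ITU-R BS.1770 meter; paths are placeholders).
        import soundfile as sf
        import pyloudnorm as pyln

        data, rate = sf.read("track.wav")                 # audio as a float array
        meter = pyln.Meter(rate)                          # BS.1770 loudness meter
        loudness = meter.integrated_loudness(data)        # measured loudness in LUFS
        normalized = pyln.normalize.loudness(data, loudness, -14.0)  # gain to hit -14 LUFS
        sf.write("track_-14LUFS.wav", normalized, rate)

    (Straight gain like this can clip if the source is quieter than the target; a real tool would add a limiter or a true-peak check on top.)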

    Each of these discoveries could be sold as an app while you continue to work towards the ultimate goal of a polyphonic pitch altering tool: ideally interactive but potentially one that accepts parameters and operates offline to do the work and produces a wave file result.

    Simple tools discovered along the way could be under $10, but the closer you get to Melodyne-like functionality, the more I'd be willing to pay; $50 for that would be a bargain. If it's great, price isn't the issue if you need the tool and it can help generate income by improving musical projects or finding more clients.

    You might even discover analysis and synthesis tools that no one has built yet, just from doing fundamental analysis of musical sources.

  • @palms said:
    Yes - so long as it can use AudioKit Tuneup and/or Scala or .tun tuning files.

    Indeed. What’s the point if you can’t tune your tuner?

  • @noob said:
    Weird thread, you will get too many chefs wanting to add everything

    Just trying to see what the most common requests are

  • I'd love to see that app for iOS and I'd be happy with Melodyne Essentials features

  • I am interested.

    Price: FabFilter range is the max I would pay.

    Features: Having never used Melodyne or similar, I would say that monophonic is good enough for me.

  • edited June 2020

    This kind of app is definitely needed on iOS.
    I just threw money at my iPad screen........TAKE IT!....LOL!

  • edited June 2020

    Hi Daniel,

    Nico here, we talked some time ago about Looper 7, thanks by the way!

    Here is an outside-the-box idea that I hope to see sometime: a function/AI that keeps the tuning/correction process from sucking the life out of takes, or that adapts to what and how something has been sung/played.

    It is not a short post, so buckle up.

    Often, in less chart-oriented music (lively/world/natural/indie/experimental genres), a take with the right emotion and interpretation might have just a few notes that are too far off (pitch-wise, rhythm-wise). But once you start tuning those notes, you get into microscopic listening mode, and you cannot unhear that level of sonic zoom and detail, at least not for some time (for me, between a couple of hours and a day depending on the depth of editing). From there it is a slippery slope, and too much editing/correction is very easy to fall into, because you start hearing everything relative to a grid, or noticing what doesn't line up visually with the theoretically correct position in pitch or timing, instead of hearing the musical context, the big picture.

    Mind you, some artists, labels and engineers really want or ask to clean up everything to some degree, as an integrated method or element of the production values; that's fine, whatever floats anybody's boat.

    In my opinion, systematically cleaning up voices according to tight pitch and rhythm grids, presets, or out of habit, by comparison with existing levels of vocal processing or stylistic references, can be counterproductive to a song's true identity and its final delivery.

    But what if there were some kind of AI that could show you, highlight, just which few notes are slightly (or more) off in pitch, timing or drift, and when you start to overdo the correction, according to an average range (or tolerance) derived from the non-corrected parts of the sound's pitch map, in a musical perception context?

    Of course this is highly subjective, but what if the AI could point you in some directions: from the most subtle adjustments to full-on auto-tune, stepped/scaled and ruler-flat pitch variations?

    The AI would take care of the microscopic, analytical side of most of the pitch/timing correction, so the operator doesn't have to, keeping the focus more on right-brain than left-brain activity, while remaining free to dive into correction-land if need be.
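
    A rough way to picture the "flag only what falls outside the take's own tolerance" part of this idea is plain statistics: per-note deviation in cents, compared against the take's own spread. A Python sketch, where note_f0s is a hypothetical list of per-note median pitches in Hz from whatever pitch tracker is used (just an illustration, not an existing tool):

        # Flag only the notes whose deviation stands out against the take's own spread.
        # note_f0s is a hypothetical per-note list of median pitches in Hz.
        import numpy as np

        def cents_off(f0_hz):
            """Deviation from the nearest equal-tempered note, in cents (A4 = 440 Hz)."""
            midi = 69 + 12 * np.log2(f0_hz / 440.0)
            return 100.0 * (midi - np.round(midi))

        def flag_outliers(note_f0s, k=2.0):
            """Indices of notes deviating more than k times the take's own spread."""
            dev = np.array([cents_off(f) for f in note_f0s])
            center = np.median(dev)                  # the take's overall tendency (a touch sharp or flat)
            spread = np.std(dev) + 1e-9              # the take's own tolerance
            return [i for i, d in enumerate(dev) if abs(d - center) > k * spread]

        # e.g. flag_outliers([261.0, 263.5, 329.6, 349.9, 392.0]) flags the 263.5 Hz note (index 1)

    Learned, genre-specific tolerances like the ones described below would sit on top of a simple measure like this.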

    The AI would have to learn from historically important and diverse singers and songs, from before there was any artificial tuning (La Callas, Bessie Smith, Ella Fitzgerald, Aretha Franklin, to name a few) up to modern, strongly auto-tuned performances in different genres and degrees of processing.

    By learning, the AI would be able to provide ranges of possible processing by genre, timeline and liveliness (and favor certain processes or not: pitch drift, breath clean-up, clean up a glissando? emphasize it? leave it as is? etc.) for the desired formats and results, and give immediate visual feedback if the user is deviating from those ranges.

    And what if the AI could adapt to your specific tastes in pitch and timing correction by feeding it songs you like?

    I work with quite a few jazz/classically trained singers, well, up until last February and hopefully again in September this year, and some of them can hear the most minute vibrato correction, or that a note has become too grid-correct when being slightly flat gave them the right feel for their song. Even they can get caught up in: what about this note, it looks (on the screen) or sounds a bit flat, no? And there you go, potentially undoing characteristics or tweaking parameters that possibly did not need to be touched.

    On a slightly parallel note, I learned not too long ago that in older classical piano recordings, before piano tuners worked with visual aids and apps showing them the theoretically perfect tuning, they would tune only by ear. Some great tuners/producers/artists went as far as tuning the piano differently per song on albums, based on the key and probably the musicians' wishes, making certain notes of certain scales slightly flat or sharp so that, when played and emphasised at specific moments, they could convey an extra degree of sadness, joy, light, darkness and so on...

    In the same vein, we have different piano tuners for the studio here, chosen mainly on sometimes late-booking/last-minute availability. One does a great job with an app and partially by ear, making the piano a bit brighter in the process. But I noticed once that, while all the notes were close to perfection, on some chords of a piece a pianist recorded later that day there was a slight vibrato in the higher harmonics, even though the notes by themselves, or divided into intervals, were fine, to the point that I had to slightly retune a few strings. So theoretical perfection does not always match musical context.
    Another tuner, my favourite one, only works by ear and tuning fork, and almost unanimously, players are time after time impressed with the voicing, as it sounds right, though it might not be 100% accurate.

    That degree or method of ear/expression-based tuning is not present in software, even though the technology is there, or almost there, to incorporate/learn those theoretically imperfect but time-proven human musical approaches.

    (So harmonic note separation and repitching/rescaling, like DNA from Melodyne, is a big yes for me, though preferably not restricted to iOS.)

    In fact, the same inspiration from human-based musical interpretation was incorporated, primitively, in the early 2000s with MIDI groove extraction of drumming parts (cf. the Funky Drummer timing and its many derivatives with sound replacement). A few years ago another one arrived, also present in Melodyne (and others) but not always easy to get going: tempo mapping.
    This marked a departure from the necessarily tight, fixed-click-based production from the start. For instance, a drummer or full band could play and record a few guide takes; from there, with tempo mapping, the inherent and fluctuating tempo variations could be, well, mapped, and the DAW would still show grids, but ones that change and follow the tempo according to how the band played. This is very significant, as it means the DAW finally became receptive to those tempo changes, from subtle to intense, allowing any overdubs to be done without headaches (the only parameter to sometimes set up being a metronome countdown based on the upcoming tempo, not necessarily a pre-roll at another tempo), or MIDI-based production elements to be added following the musicians and not the other way around, which leaves room for much more vivid and unique productions.
    A few years back, after a long session, the band was happy and left late, only to tell me the day after, with most of them not available to come back to the studio: "Dang, we forgot to overdub the shaker mid-song, can you add one?" All this in an 8-minute Afro-groove song with a pretty wildly accelerating tempo. My percussion skills being subpar, I tempo-mapped their session and added a simple MIDI-based 1-bar shaker loop from Stylus RMX from the middle of the song to the end. It sounded great; I couldn't believe how lively and real it sounded, while the one-bar loop just followed the tempo. A real ear-opener. The band was amazed and thought a very good percussion player had done it.

    (On a side note, tempo-mapping integration would be important for me, and one that succeeds without too much fine-tuning, as most are not as easy to use as advertised.)

    Think about it: with fixed tempi, quantise and 12 perfect tones you have a finite number of possible interpretations, leaving all the variation to the sound itself (another debate, but then you have identical virtual instrument/preset use, which can be limiting). Remove the tight snapping to those 3 aspects (as tempo mapping now allows for DAW session tempo) and you have infinite possibilities of interpretation, uniqueness and degrees of delivery, even for identical material, like before grids and auto-tune, only with control over it.

    But you know what happens when you open the cage of a canary that has stayed long in there, right?

    I still see an attachment to a fixed click that is there out of habit for many live musicians/bands.

    Not saying correction is bad, but more measured/assisted correction, with AI, seems a move forward.

    Back to our AI: what if it could have a learned library of timing/pitch deviations per genre, period, even scale/key, as previously described (for instance with mappings of a microtonally adapted C minor that sounds more sad, or open and uplifting, based on the slightly retuned relationships between the notes in an older, classic recording of classical, rock or jazz), and apply some degree of imperfection to an otherwise very precise take?

    Why? Because, I don't know if you guys have noticed, since all those singing reality shows and the intense, normalised use of auto-tune, some young singers, having this as a reference, learn to sing along to their fav song of the moment, and in doing so are sometimes able to sing perfectly straight, vibrato-free notes with incredibly precise pitch that would have been otherworldly generations ago. Not saying it is bad; it is sometimes even very impressive and musical. But what about the intrinsic human micro-variations that make singer A sound different from singer B?

    Aren’t we adapting more to technology instead of technology adapting to us?

    I'd love to have the option not just to tighten things up, but also the ability to loosen things up.
    And to have a guide in the process that takes care of the analytical precision, so I can keep focused on the music.

    Yes to many Melodyne options and more, in the end ;)

    I'd pay up to 100 on iOS, more on macOS/Win

    Now, as much as I am put off by and wary of subscriptions, if developing such AI features proves expensive, what about rent-to-own through the App Store: you pay 5 or 10 for a certain number of months and then the software is yours.

    The developer gets the investment back, and customers spread the cost (or pay 5-10 now and then to use it for a month) without being trapped in forever subscriptions.

    Thanks for your time and open mindset, Daniel.

    Best

    Nico

  • @danielfromcodalabs said:
    Hi guys, I’m Daniel Kuntz, founder of Coda Labs (codalabs.io). You might know me as the developer of L7 and AudioTune.

    I’ve been thinking about a more “pro” version of AudioTune, and something that comes up repeatedly in my discussions with users is Melodyne. I’m trying to gauge interest for a Melodyne-style app on iOS and understand what features are important to iOS musicians. This forum seems like a great place to start a discussion.

    There is one limitation that I am not sure how to overcome, and that is polyphonic detection and manipulation. Melodyne calls this Direct Note Access (DNA). I am incredibly interested in finding out how the DSP works, but I can't promise I'll be able to replicate almost a decade of PhD research in any reasonable amount of time.

    So for now, please assume this is a monophonic pitch correction tool that allows you to edit individual notes and pitch contours in a graph-style UI.

    My main questions:

    • Are you interested? Why or why not?
    • How much would you pay for an app like this?
    • What features are “must-haves”? Meaning if the app doesn’t have this feature, you wouldn’t download it.
    • What would be an ultimate “killer” feature? Even if it defies the laws of physics. Think outside of the box :)

    Interested. Basically, something modeled on Melodyne essential is more than enough.

    30 USD
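
    For the curious, the core of the monophonic workflow described in the quote above (track the fundamental, then work out how far each moment is from a target scale) can be sketched offline in a few lines of Python. This is only an illustration under assumed tools (librosa for f0 tracking); the file name, scale choice and helper are hypothetical and say nothing about how AudioTune or Melodyne actually work:

        # Offline sketch: track a monophonic f0 curve and compute the per-frame
        # correction toward the nearest note of a chosen scale.
        # librosa is assumed; file name, scale and helper names are illustrative only.
        import librosa
        import numpy as np

        y, sr = librosa.load("vocal_take.wav", sr=None)
        f0, voiced_flag, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                          fmax=librosa.note_to_hz("C6"), sr=sr)

        C_MAJOR = np.array([0, 2, 4, 5, 7, 9, 11])       # pitch classes of the target scale

        def snap_to_scale(midi_note, scale=C_MAJOR):
            """Nearest MIDI note whose pitch class is in the scale."""
            candidates = np.array([12 * octave + pc for octave in range(11) for pc in scale])
            return candidates[np.argmin(np.abs(candidates - midi_note))]

        midi = librosa.hz_to_midi(f0)                    # NaN where unvoiced
        target = np.array([snap_to_scale(m) if not np.isnan(m) else np.nan for m in midi])
        correction_cents = 100.0 * (target - midi)       # what a corrector would apply per frame

    The hard parts a real app adds on top are applying that correction without artifacts (formant-preserving resynthesis), splitting notes sensibly, and exposing it in a graph-style editor rather than a blind snap.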

  • There was Nika by Ruben Zilibowitz (not available anymore, I think) that tried a similar approach to Melodyne. But it was FAR from a satisfactory experience.

  • Interested, with a cap of $20.
    Thank you!

  • If you're able to come anywhere near the features and quality of Melodyne, and do it on iOS, you could pretty much name your price and I'll pay it.

  • @danielfromcodalabs Any updates you can share with us? :)

  • Definitely interested in this x10.

  • @danielfromcodalabs
    I would start with a high-quality monophonic pitch and formant shifter AUv3 plugin at around $10.
    There are several options out there and they usually work well for smaller shifts, but shifting a full octave up (or down) usually sounds very bad. If it works out, you can still go for a higher-specced app after that and proceed into Melodyne territory.
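
    For a sense of why big shifts are the hard part: a standard offline approach (phase-vocoder time-stretch plus resampling, which is what librosa's pitch_shift does) holds up fine for a couple of semitones but smears transients and drags formants along at a full octave, which is roughly the artifact described above. A quick way to hear the difference, with librosa assumed and the file name a placeholder:

        # Compare a small shift with a full-octave shift (librosa assumed; path is a placeholder).
        # Large shifts expose the smearing and "chipmunk" formant shift of naive repitching.
        import librosa
        import soundfile as sf

        y, sr = librosa.load("vocal_take.wav", sr=None)
        up_2 = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)    # small shift: usually usable
        up_12 = librosa.effects.pitch_shift(y, sr=sr, n_steps=12)  # full octave: artifacts obvious
        sf.write("shift_plus2.wav", up_2, sr)
        sf.write("shift_plus12.wav", up_12, sr)

    A formant-preserving approach (e.g. PSOLA-style or spectral-envelope correction) is broadly what separates the usable octave shifters from the rest.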

  • Appreciate everyone’s continued interest but after some thinking I’ve decided to not pursue this right now. Kind of taking a break from music apps. Maybe sometime soon!

  • edited July 2020

    @richardyot said:
    One important caveat is that Melodyne is usually an offline process in a separate app. So no need to make this AUv3, IMO.

    On the desktop I've used both Melodyne and Revoice Pro, and I prefer Revoice Pro. It has some features that Melodyne doesn't have, such as the ability to line up double-tracks and also edit the energy of the sounds (so you can level things out). It also automatically detects non-pitched sounds such as "S" sounds and allows you to edit their level but doesn't try to pitch them.

    I don't think polyphonic features are important at all, and I don't think many people use those features in Melodyne. The important thing is the ability to correct pitch without leaving too many artifacts.

    Important features for me:

    • Ability to control the level of pitch correction, from slight to strong, using a slider
    • Ability to smoothly join separate notes so that there aren't sudden jumps in pitch (which is the tell-tale sign of pitch correction)
    • Ability to control where the notes are split
    • Ability to selectively revert some notes back to their original state
    • Ability to optionally map the notes to a scale or a selection of scales.

    Maybe it's been a while since you've used Melodyne... but the "s" thing is there for sibilants (version 5), and multitrack support has been there since the very introduction of Melodyne Studio. A large number of people use Melodyne polyphonically... including myself. I know folks who use Melodyne Studio to do beyond-Acid-Pro type stuff. I personally use Melodyne to render MIDI from audio, and also to remove sounds I don't want from loops. Melodyne is powerful stuff. I would also like to mention that with LPX and Studio One, ARA exists and integrates Melodyne with your DAW as well... so the offline thing is also disappearing.

  • $19.99 is going to be the sweet-spot price; some people will pay $39.99, as shown by other developers such as FabFilter. Let's also consider that FabFilter is an established company, with tremendously useful, desktop-DAW-quality plugs.

    If it isn't AU, it's a no for me... jumping around too much on iOS is not my love. I have almost erased everything that doesn't work together thus far... I never was a fan of IAA, it's always a suck fest using that. <— hence Apple killing it off soon.

    I think a lot of folks here would go for a monophonic vocal tuner application, but so far it's sounding more Auto-Tune than Melodyne. Basically like your existing app with a grid editor and a few more features. Sure, why not. It doesn't really have any competitors as of yet. When the platforms merge, however, it will. <— and they will merge!

    As stated above, Melodyne is rather pricey, so even after a merge, if the app is useful and priced right... I can see many still buying and using it.

    I'd much rather see some tools such as VocalSynth 2 or OVox myself... or something that can really get that tube-in-the-mouth talk box sound down pat.

    I'd also like to see a really dope sample-mangling tool finally surface for iOS. Something that can compete with Iris 2, Apple's Alchemy, Kontakt, and the Roland V-Synth. So far nothing like this exists on iOS, and the Roland V-Synth doesn't even exist as a DAW plugin... just granular synths, phrase samplers, and multisamplers...

    Meh

  • @danielfromcodalabs said:
    Appreciate everyone’s continued interest but after some thinking I’ve decided to not pursue this right now. Kind of taking a break from music apps. Maybe sometime soon!

    That’s unfortunate. Really big fan of your work & hope to see that change!

  • @danielfromcodalabs said:
    Appreciate everyone’s continued interest but after some thinking I’ve decided to not pursue this right now. Kind of taking a break from music apps. Maybe sometime soon!

    That's a shame, it's definitely missing on this platform. I just used AudioTune for some vocal cuts on a sample pack I was working on, so good. So easy to use and lightweight. Anyway, if you do get around to an "inspired by" Melodyne app, I'll have cash waiting for sure.

  • Yes... I'd definitely love a Melodyne-type app... and would pay around 30 USD... or more...
    Something for the future, hopefully...

  • edited August 2020

    @danielfromcodalabs said:
    Appreciate everyone’s continued interest but after some thinking I’ve decided to not pursue this right now. Kind of taking a break from music apps. Maybe sometime soon!

    I can understand being burned out on music app creation. Maybe once you get your second wind, this'll be the project you tackle first? A Melodyne-styled app is much needed on this platform. As I said, I'd pay $30-$50 for the monophonic version at least. Polyphonic version can always come later on down the line as an IAP. Cheers. :)

  • Yes, all I really need is this.
