Which DAW gets you your sound easiest?

I’m curious if any of you have tested iOS DAWs this way: I’m currently beta testing a new version of n-Track Studio (which now has new guitar and bass amps, and vocal pitch correction). It all works pretty well, so I laid out a basic drum/guitar/bass/vocal song in n-Track and compared it to the same thing done in Auria Pro and Cubasis with all their IAPs. Now, I realize I could probably get exact duplicates out of each if I worked hard enough, but I’m testing which one sounds best to me with the amount of knowledge, skill, and time I have available, which is not much of any.
So far my front runner is Cubasis, but I’d like to hear your opinions about what you’ve experienced... please.

Comments

  • I've been creating music on iOS for just under five years now and I've tried a really wide range of apps. I seem to have settled into the following pattern of late, but of course it's subject to change and/or evolution:

    I sketch out a basic arrangement in either GarageBand or Gadget, laying down a beat, bass, some keys and synths. At this stage I like to create the musical backbone of the song and its various parts. Working in these two apps makes it really easy to create the music with their virtual instruments, and also to experiment with the structure, because both apps let you change and reorder the sections of the song easily. It's also just so damn convenient working in an all-in-one environment. GB has the advantage of allowing AU and IAA as well, so it opens up the whole palette of what's available to me. I'll record a rough vocal at this stage too, to work on the melody and lyrics.

    Once I'm happy with the music I export everything to Auria as stems (tedious with GB, easy with Gadget) where I mix, record the final vocals, and also add additional instruments as needed.

    For me this plays to the main strengths of the apps I have: GB and Gadget are quick, immediate, and perfect for writing music. Auria is great for doing the actual mix. Works for me, YMMV.

  • Same as @richardyot, but probably I do a higher proportion of my composition/arrangement in Auria itself, because I’m quite familiar with Twin2 and have some great EXS and soundfont libraries for Lyra. But GB’s smart instruments are really an invaluable tool!

  • Grooverider. If I need a sample, I’ll get it wherever, but this app allows me to dial in what I want musically and sonically very quickly. Grand Finale works for mastering.

  • This is a strange question to me (not in a bad way, just strange). I **never** think of a DAW in terms of the sound. To me a DAW is just a vehicle for running apps and FX. It doesn't, or shouldn't, affect the sound in any way. It's not like the old days, where mixing consoles actually colored the sound and varied from one to another. It's just bits and bytes now.

    On the other hand, virtually everyone who has shelled out for the FabFilter plugins swears by them, and they are available only with Auria Pro, so I guess that could be considered. But I'll never invest in those as long as they're only available in Auria, so that question is moot from my perspective.

    So, for me it comes down to workflow only. On that score, I'm still finding my way. I am completing a project a week, every week I can, this year. I start and finish in either Auria, BM3, Gadget, or Cubasis. Or I start in GrooveRider, then take the audio clips over to another DAW and finish them there. As the very last step, everything goes to Grand Finale for mastering.

    So far, I really can't settle on a DAW. They each have frustrations, limitations, and many, many strengths. But sound ... from a DAW? That doesn't even come into it. If there were a DAW that colored the sound without FX or plugins that I control, I wouldn't touch it with a 10-foot pole.

  • @wim said:
    This is a strange question to me (not in a bad way, just strange). I **never** think of a DAW in terms of the sound. To me a DAW is just a vehicle for running apps and FX. It doesn't, or shouldn't, affect the sound in any way. It's not like the old days, where mixing consoles actually colored the sound and varied from one to another. It's just bits and bytes now.

    On the other hand, virtually everyone who has shelled out for the FabFilter plugins swears by them, and they are available only with Auria Pro, so I guess that could be considered. But I'll never invest in those as long as they're only available in Auria, so that question is moot from my perspective.

    So, for me it comes down to workflow only. On that score, I'm still finding my way. I am completing a project a week, every week I can, this year. I start and finish in either Auria, BM3, Gadget, or Cubasis. Or I start in GrooveRider, then take the audio clips over to another DAW and finish them there. As the very last step, everything goes to Grand Finale for mastering.

    So far, I really can't settle on a DAW. They each have frustrations, limitations, and many, many strengths. But sound ... from a DAW? That doesn't even come into it. If there were a DAW that colored the sound without FX or plugins that I control, I wouldn't touch it with a 10-foot pole.

    It’s definitely a workflow question, I agree. You kind of have to imagine your ideal WAY of making music and then choose a tool that is as close to it as possible. So "what do I need?" and "what DON’T I need?" are the questions. I think the idea of a DAW's sound is more a factor of how easily I can get the sounds I prefer. The user interface is a huge factor: are the relevant parameters easily accessible? Are they organized in such a fashion that the user can quickly bounce back and forth between them? Are there things that must be set up at the beginning of every music session, or is it “plug and play”?

    Then there are tools like Auxy, with stuff running in the background that is inaccessible to the user; you’re provided with nice sounds and some controls to tweak them, not too much, but just enough to get a custom, personal feel. At that point it comes down to how much time I want to spend composing music versus how much time I want to spend mixing and designing sounds from scratch. It’s a balance, and some folks might lean one way or the other.

  • edited April 2018

    I don't really do much "DAW" work.

    For me it feels antiquated.

    I am attempting to make my music as true to reality as possible and allow the platform to encompass the music presentation.

    The instant ability to create loses its luster if it is worked over for 8 hours on a DAW.

    THIS IS FOR ME I AM SAYING.

    I like to do most things live, or close to it, including one-take recording when possible.

    I usually do a song into LoopyHD.

    Play each part and record the loops (8-12).

    Then send them over to Cubasis to quickly combine them or send them to Blocs and play it out on Launchpad.

    I am also a big user of single source songs.

    Apps I can do a whole number in:

    KMachine
    Gadget
    Grooverider
    Takete
    Groovebox
    Dhalang

    What am I forgetting?

    iMaschine.
    iMPC pro
    Triq Traq
    Figure

    I find the sound quality better when I stick to one app.

    Less mixing of levels and compression of multiple sources.

    I think the disparity in sound quality between apps is seldom addressed.

    One day I will bring it up

    I am in a hurry but wanted to post this

  • I do all my work with AUM, Audiobus, and AudioShare. It can be cumbersome sans a proper DAW, but I just don't like the traditional DAW process. If I had to pick one it would certainly be Cubasis - not for sound - but for workflow. That said, I haven't used Cubasis for anything in months. I do like the BM3 application but never really enjoyed MPC-style sequencing.

  • @Mr_Beak said:
    I do all my work with AUM, Audiobus, and AudioShare. It can be cumbersome sans a proper DAW, but I just don't like the traditional DAW process. If I had to pick one it would certainly be Cubasis - not for sound - but for workflow. That said, I haven't used Cubasis for anything in months. I do like the BM3 application but never really enjoyed MPC-style sequencing.

    Thank god for AUM

    It allows non-specific song creation anywhere

  • @RUST( i )K said:

    @Mr_Beak said:
    I do all my work with AUM, Audiobus, and AudioShare. It can be cumbersome sans a proper DAW, but I just don't like the traditional DAW process. If I had to pick one it would certainly be Cubasis - not for sound - but for workflow. That said, I haven't used Cubasis for anything in months. I do like the BM3 application but never really enjoyed MPC-style sequencing.

    Thank god for AUM

    It allows non-specific song creation anywhere

    AUM is one of the true iOS benchmark apps. I never do anything on the iPad without it.

  • @Mr_Beak said:

    @RUST( i )K said:

    @Mr_Beak said:
    I do all my work with AUM, Audiobus, and AudioShare. It can be cumbersome sans a proper DAW, but I just don't like the traditional DAW process. If I had to pick one it would certainly be Cubasis - not for sound - but for workflow. That said, I haven't used Cubasis for anything in months. I do like the BM3 application but never really enjoyed MPC-style sequencing.

    Thank god for AUM

    It allows non-specific song creation anywhere

    AUM is one of the true iOS benchmark apps. I never do anything on the iPad without it.

    Swiss army fo sho

  • edited April 2018

    Honestly, I believe the "best DAW" is whatever method gets you to create music and, better yet, finish tracks. If a DAW does what you need it to do and then stays out of your way, it's a pretty good DAW for you.

    Lately, I've mostly been producing in Gadget on my iPhone, exporting tracks, and mixing and mastering in Waveform on my PC (which I also use(d) for producing music, just less so: since I sit in front of a computer all day for work, I don't enjoy doing it as much for music too). I find it much easier, personally, to apply audio effects and see what's going on with a 27" screen, and to be able to mix effectively with all the faders on screen.

  • tjatja
    edited April 2018

    It seems that I understood the OP in a totally different way than some of the above posters.

    The OP clearly wrote about laying out a "basic drum track/guitar/bass/vocal song" in a DAW, which I would read as "in a sequencer", with a piano roll, step sequencer, and/or some drum sequencer, and also using the sounds that are available in that DAW.

    OK, this COULD also mean that you arrange pieces of audio and not MIDI, but I do not believe that the OP was referring to that.

    And in this context, AUM just does not work. You would need some other app for the MIDI sequencing and the sounds.
    Also, this was not about any "colouring" of the audio.

    It was about "creating a song".

    And yes, as far as I understand n-Track, it is similar to what Audio Evolution Mobile or Cubasis do!
    Personally, I still like Cubasis the most.

    But what does n-Track 8 [ Pro ] add in the next version?
    ;)

  • Auria, because I'm primarily a guitar player and Saturn is the best amp modeler/saturator on the App Store. So I'm ready to go almost immediately.

    I used to try to use Cubasis to sketch stuff then do the real thing in Auria, but I'm just not good at recreating the sketches because there will always be some nuances to my playing that I immediately attach myself to in the first take when everything is really fresh. Then I either agonize over trying to re-achieve it (never works) or sit there unhappy with the newer, 'lesser' take.

    Both of them have the problem of a limited number of effects chain slots (4 in both, I think), so AUM often comes into the picture.

  • edited April 2018

    Thanks, everybody, for your responses so far. I forgot about Audio Evolution and its pitch correction. When the new n-Track is released it will be similar to that. I don’t think Cubasis has pitch correction for vocals, right?
    Btw, I was referring to audio recordings of those tracks; I use MIDI only when I have to.
    OK, I see the consensus is that there is no “sound coloring” from DAW to DAW, but I hear differences even when recording a dry vocal using the same mic and positioning in each DAW. Auria is cleaner and clearer; Cubasis sounds thicker and meatier to my ears. Same with live instruments recorded in. By contrast, iOS instruments sound the same regardless. Maybe it’s just my crazy ears. I was referring to differences in the sound of live recorded audio, be it from a microphone or from a line in through an interface. I’m sorry I didn’t spell it out clearly in the original post. Maybe I’m the only one who thinks different DAWs process microphone inputs with different amounts of headroom, a different default bit depth, or a built-in EQ under the hood. Sorry if I wasted everyone’s time.
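
One way to check a hunch like this empirically is a null test: record or export the same source through two DAWs, align the two files sample-accurately, flip the polarity of one, and sum. Silence means the audio paths are identical; any residual is a real difference (gain, EQ, dither, latency). A minimal sketch, assuming NumPy is available; the two "renders" here are synthetic stand-ins rather than actual DAW exports:

```python
import numpy as np

def null_test(render_a, render_b):
    """Subtract one render from the other (equivalent to a polarity flip
    plus summing) and return the peak residual in dBFS.

    -inf means the two renders are sample-identical; anything much above
    roughly -60 dBFS points to a real difference in the audio path."""
    n = min(len(render_a), len(render_b))   # trim to the common length
    residual = render_a[:n] - render_b[:n]
    peak = float(np.max(np.abs(residual)))
    return float("-inf") if peak == 0.0 else 20 * np.log10(peak)

# Synthetic stand-ins for two DAW exports: the same 440 Hz tone, once
# untouched and once with a 0.1% gain difference (e.g. a different
# headroom default).
t = np.arange(44100) / 44100.0
tone = 0.5 * np.sin(2 * np.pi * 440 * t)

print(null_test(tone, tone))          # -inf: identical renders
print(null_test(tone, 1.001 * tone))  # about -66 dBFS: small but real
```

With real exports you would load both WAVs at the same bit depth and sample rate and align them before subtracting; a misalignment of even one sample produces a large, misleading residual.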

  • edited April 2018

    @NoiseHorse said:
    Thanks, everybody, for your responses so far. I forgot about Audio Evolution and its pitch correction. When the new n-Track is released it will be similar to that. I don’t think Cubasis has pitch correction for vocals, right?
    Btw, I was referring to audio recordings of those tracks; I use MIDI only when I have to.
    OK, I see the consensus is that there is no “sound coloring” from DAW to DAW, but I hear differences even when recording a dry vocal using the same mic and positioning in each DAW. Auria is cleaner and clearer; Cubasis sounds thicker and meatier to my ears. Same with live instruments recorded in. By contrast, iOS instruments sound the same regardless. Maybe it’s just my crazy ears. I was referring to differences in the sound of live recorded audio, be it from a microphone or from a line in through an interface. I’m sorry I didn’t spell it out clearly in the original post. Maybe I’m the only one who thinks different DAWs process microphone inputs with different amounts of headroom, a different default bit depth, or a built-in EQ under the hood. Sorry if I wasted everyone’s time.

    Never a waste of time to look at these things. There are only a few people around here who actually know what’s going on under the hood with the various DAWs and sound processes... (definitely not me) but I’m always interested to read people’s feels about it all.

  • BM3 is it for me! Add in Beathawk AU for the sick IAP samples (that Mellotron is bae) and Master Record/Lo Fly Dirt and I am fully satisfied

  • @Littlewoodg said:

    @NoiseHorse said:
    Thanks, everybody, for your responses so far. I forgot about Audio Evolution and its pitch correction. When the new n-Track is released it will be similar to that. I don’t think Cubasis has pitch correction for vocals, right?
    Btw, I was referring to audio recordings of those tracks; I use MIDI only when I have to.
    OK, I see the consensus is that there is no “sound coloring” from DAW to DAW, but I hear differences even when recording a dry vocal using the same mic and positioning in each DAW. Auria is cleaner and clearer; Cubasis sounds thicker and meatier to my ears. Same with live instruments recorded in. By contrast, iOS instruments sound the same regardless. Maybe it’s just my crazy ears. I was referring to differences in the sound of live recorded audio, be it from a microphone or from a line in through an interface. I’m sorry I didn’t spell it out clearly in the original post. Maybe I’m the only one who thinks different DAWs process microphone inputs with different amounts of headroom, a different default bit depth, or a built-in EQ under the hood. Sorry if I wasted everyone’s time.

    Never a waste of time to look at these things. There are only a few people around here who actually know what’s going on under the hood with the various DAWs and sound processes... (definitely not me) but I’m always interested to read people’s feels about it all.

    My feel is that different DAWs run different DSP code, and therefore would have to sound different, or color the sound. Live previews are just previews, and of varying quality. Rendering then uses proprietary code to generate the final audio file, and every DAW has a signature sound shaped by the differences in the proprietary code used.

    It is the same reason why Model D and iMini sound different, even though they are trying to emulate the exact same hardware.

  • @CracklePot said:

    @Littlewoodg said:

    @NoiseHorse said:
    Thanks, everybody, for your responses so far. I forgot about Audio Evolution and its pitch correction. When the new n-Track is released it will be similar to that. I don’t think Cubasis has pitch correction for vocals, right?
    Btw, I was referring to audio recordings of those tracks; I use MIDI only when I have to.
    OK, I see the consensus is that there is no “sound coloring” from DAW to DAW, but I hear differences even when recording a dry vocal using the same mic and positioning in each DAW. Auria is cleaner and clearer; Cubasis sounds thicker and meatier to my ears. Same with live instruments recorded in. By contrast, iOS instruments sound the same regardless. Maybe it’s just my crazy ears. I was referring to differences in the sound of live recorded audio, be it from a microphone or from a line in through an interface. I’m sorry I didn’t spell it out clearly in the original post. Maybe I’m the only one who thinks different DAWs process microphone inputs with different amounts of headroom, a different default bit depth, or a built-in EQ under the hood. Sorry if I wasted everyone’s time.

    Never a waste of time to look at these things. There are only a few people around here who actually know what’s going on under the hood with the various DAWs and sound processes... (definitely not me) but I’m always interested to read people’s feels about it all.

    My feel is that different DAWs run different DSP code, and therefore would have to sound different, or color the sound. Live previews are just previews, and of varying quality. Rendering then uses proprietary code to generate the final audio file, and every DAW has a signature sound shaped by the differences in the proprietary code used.

    It is the same reason why Model D and iMini sound different, even though they are trying to emulate the exact same hardware.

    Not sure that differences between synths' sound qualities are completely analogous to differences in audio between DAWs. I'm assuming an attempt at some semblance of transparency on the DAW tip, but again, I don't count myself among those in the know on these things.

    Here’s an interesting take on audio differences among big boy DAWs

    https://www.image-line.com/support/FLHelp/html/app_audio.htm
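
One concrete, well-documented reason two hosts can disagree on level, in the spirit of the linked article, is the default pan law: a center-panned mono track is attenuated by a different amount depending on which law the DAW ships with. A small illustrative sketch (the helper and the law names are mine, not any DAW's actual API):

```python
import math

# Center-channel attenuation under three common pan laws. Hosts default
# to different laws, so the "same" project can come out at different
# levels before any plugin is involved.
def center_gain_db(law):
    if law == "constant_power":   # -3 dB law: L = R = cos(45 degrees)
        g = math.cos(math.pi / 4)
    elif law == "linear":         # -6 dB law: L = R = 0.5
        g = 0.5
    elif law == "none":           # 0 dB law: no center attenuation
        g = 1.0
    else:
        raise ValueError(f"unknown pan law: {law}")
    return 20 * math.log10(g)

for law in ("constant_power", "linear", "none"):
    print(f"{law:>14}: {center_gain_db(law):+.2f} dB at center")
```

A 3 dB level offset like this is easily audible, yet it is a settings difference, not a "signature sound" baked into the render engine.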

  • @Littlewoodg said:

    @CracklePot said:

    @Littlewoodg said:

    @NoiseHorse said:
    Thanks, everybody, for your responses so far. I forgot about Audio Evolution and its pitch correction. When the new n-Track is released it will be similar to that. I don’t think Cubasis has pitch correction for vocals, right?
    Btw, I was referring to audio recordings of those tracks; I use MIDI only when I have to.
    OK, I see the consensus is that there is no “sound coloring” from DAW to DAW, but I hear differences even when recording a dry vocal using the same mic and positioning in each DAW. Auria is cleaner and clearer; Cubasis sounds thicker and meatier to my ears. Same with live instruments recorded in. By contrast, iOS instruments sound the same regardless. Maybe it’s just my crazy ears. I was referring to differences in the sound of live recorded audio, be it from a microphone or from a line in through an interface. I’m sorry I didn’t spell it out clearly in the original post. Maybe I’m the only one who thinks different DAWs process microphone inputs with different amounts of headroom, a different default bit depth, or a built-in EQ under the hood. Sorry if I wasted everyone’s time.

    Never a waste of time to look at these things. There are only a few people around here who actually know what’s going on under the hood with the various DAWs and sound processes... (definitely not me) but I’m always interested to read people’s feels about it all.

    My feel is that different DAWs run different DSP code, and therefore would have to sound different, or color the sound. Live previews are just previews, and of varying quality. Rendering then uses proprietary code to generate the final audio file, and every DAW has a signature sound shaped by the differences in the proprietary code used.

    It is the same reason why Model D and iMini sound different, even though they are trying to emulate the exact same hardware.

    Not sure that differences between synths' sound qualities are completely analogous to differences in audio between DAWs. I'm assuming an attempt at some semblance of transparency on the DAW tip, but again, I don't count myself among those in the know on these things.

    Here’s an interesting take on audio differences among big boy DAWs

    https://www.image-line.com/support/FLHelp/html/app_audio.htm

    That is so funny that you linked an Image-Line page regarding this. The reason I feels the way I do is that years back, when I told a Logic user that I use FL Studio, he tried to explain that Logic had superior coding and that it could be heard by comparing the final output of the two DAWs.

    I don’t know how true any of this is, but it makes logical sense if you think about it. It also lines up well with my experience in different 3D graphics programs that each had its own proprietary render engine. Each program had notable differences in final render quality.

    I also feels that while an attempt at transparency is most likely the case, it does not mean that it is actually achievable. There are most likely way too many variables involved.

    I guess I feels a bit pessimistic about the whole topic.

  • Thanks Doug, great link re: DAWs.

  • @CracklePot said:

    @Littlewoodg said:

    @CracklePot said:

    @Littlewoodg said:

    @NoiseHorse said:
    Thanks, everybody, for your responses so far. I forgot about Audio Evolution and its pitch correction. When the new n-Track is released it will be similar to that. I don’t think Cubasis has pitch correction for vocals, right?
    Btw, I was referring to audio recordings of those tracks; I use MIDI only when I have to.
    OK, I see the consensus is that there is no “sound coloring” from DAW to DAW, but I hear differences even when recording a dry vocal using the same mic and positioning in each DAW. Auria is cleaner and clearer; Cubasis sounds thicker and meatier to my ears. Same with live instruments recorded in. By contrast, iOS instruments sound the same regardless. Maybe it’s just my crazy ears. I was referring to differences in the sound of live recorded audio, be it from a microphone or from a line in through an interface. I’m sorry I didn’t spell it out clearly in the original post. Maybe I’m the only one who thinks different DAWs process microphone inputs with different amounts of headroom, a different default bit depth, or a built-in EQ under the hood. Sorry if I wasted everyone’s time.

    Never a waste of time to look at these things. There are only a few people around here who actually know what’s going on under the hood with the various DAWs and sound processes... (definitely not me) but I’m always interested to read people’s feels about it all.

    My feel is that different DAWs run different DSP code, and therefore would have to sound different, or color the sound. Live previews are just previews, and of varying quality. Rendering then uses proprietary code to generate the final audio file, and every DAW has a signature sound shaped by the differences in the proprietary code used.

    It is the same reason why Model D and iMini sound different, even though they are trying to emulate the exact same hardware.

    Not sure that differences between synths' sound qualities are completely analogous to differences in audio between DAWs. I'm assuming an attempt at some semblance of transparency on the DAW tip, but again, I don't count myself among those in the know on these things.

    Here’s an interesting take on audio differences among big boy DAWs

    https://www.image-line.com/support/FLHelp/html/app_audio.htm

    That is so funny that you linked an Image-Line page regarding this. The reason I feels the way I do is that years back, when I told a Logic user that I use FL Studio, he tried to explain that Logic had superior coding and that it could be heard by comparing the final output of the two DAWs.

    I don’t know how true any of this is, but it makes logical sense if you think about it. It also lines up well with my experience in different 3D graphics programs that each had its own proprietary render engine. Each program had notable differences in final render quality.

    I also feels that while an attempt at transparency is most likely the case, it does not mean that it is actually achievable. There are most likely way too many variables involved.

    I guess I feels a bit pessimistic about the whole topic.

    I feels similarly.
    And I get the funny
    I was cruising the Renoise forums of late, and there’s a sticky there (ongoing for years) that asks “Why Does Renoise Sound So Good?” I think that’s where I found the IL article, if not here on this very forum (or on the IL forum itself, where I represent as a proud FL user as well).
