Comments
I would be really interested to hear other developers’ take on this.
(cc @Lars @woodman @Chris_Randall @sundhage)
Which one would you choose?
The one that creates better graphics or the one that sounds better?
Loaded question?
In other words: Would you judge a sound, a synth, an effect by how it looks on the oscilloscope or by how it makes you feel when you listen to it?
It certainly is
This is a fair and reasonable point. The sound should not be judged by its appearance on a graph. Visual tools should only be used for training your ears. Once one has learned to hear the details in sounds, the graphical tools become unnecessary. They are sometimes useful as confirmation, though, when we wonder, “Am I hearing what I think I’m hearing?”
While that can be a valid artistic choice in certain situations, in other situations (primarily mixing) the oscilloscope is showing you stuff that could mess up your mix, even if you don't hear it directly. Transparency is what you want on your master bus and for glue compression.
Also, I don't know that you ever really want aliasing on a compressor. It's impossible to eliminate aliasing entirely once you allow distortion (unless you oversample, but that brings its own problems), so to some degree it's a matter of trade-offs, but as you can see in the video, some developers do a better job of creating pleasing distortion while minimizing aliasing. There are also some compressors (the recent DDMF one is a good example) that will sound good so long as your signal doesn't have content above a certain frequency. This is useful information, particularly because, if you don't care about those upper frequencies, you can filter them out before the compressor (which will get you a better sound). And you are certainly not going to want to apply saturation before using that compressor.
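To make the fold-back concrete, here's a rough Python/NumPy sketch (not modelled on any particular plugin): clipping a 15 kHz sine at 44.1 kHz creates harmonics above Nyquist, and they come back as inharmonic components you'd never get from analog gear.

```python
# Sketch of why distortion causes aliasing at 44.1 kHz: hard-clipping a 15 kHz
# sine creates odd harmonics at 45, 75, 105 kHz..., which can't be represented
# and fold back to 0.9, 13.2, 16.8 kHz and so on as inharmonic "digital" junk.
import numpy as np

fs = 44100
t = np.arange(fs) / fs                      # 1 second, 1 Hz FFT resolution
x = np.sin(2 * np.pi * 15000 * t)
y = np.clip(3.0 * x, -1.0, 1.0)             # crude saturation / clipping

spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
db = 20 * np.log10(spec / spec.max() + 1e-12)

for f in (15000, 900, 13200, 16800):        # the tone, then folded 3rd/5th/7th harmonics
    print(f"{f:5d} Hz: {db[np.argmin(np.abs(freqs - f))]:6.1f} dB")
```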
Obviously. And @Blue_Mangoo demonstrated that the visuals provide a glimpse of what was behind the audible artifacts. The question seemed to imply that his analysis and conclusions were based purely on visuals, which wasn't the case.
Additionally, those of us over 30 don't hear some things that scream at younger people. So you can sometimes use spectrographic analysis to see things that you can't hear but others can.
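For example, a spectrogram makes energy above roughly 16 kHz visible whether or not you can hear it. A rough SciPy sketch (the file name is a placeholder):

```python
# Sketch: use a spectrogram to spot content you may not hear, e.g. energy above
# ~16 kHz. "input.wav" is a placeholder for whatever you want to inspect.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("input.wav")
if x.ndim > 1:
    x = x.mean(axis=1)                       # mix to mono for analysis

f, t, Sxx = spectrogram(x.astype(np.float64), fs, nperseg=4096)
db = 10 * np.log10(Sxx + 1e-12)

high = f >= 16000                            # band many adults no longer hear well
print(f"max level above 16 kHz: {db[high].max():.0f} dB (relative)")
```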
While someone could misrepresent spectrographic analysis, I don't see evidence of that here. He was pretty clear in the original video about at least one case where the artifacts were trivial: they showed up in the graphics, but he said they wouldn't be heard.
If there is evidence that the conclusions weren't accurate, I am very interested.
I don't question the accuracy of theoretical, pure sine-wave measurements.
Doing this kind of measurement can help tremendously in learning how stuff works.
What I question, though, is that you can do a compressor shootout using such measurements in order to find the one that works best for all kinds of real-world audio material, and I suspect that's the idea behind this effort.
Also, I highly doubt that a spectrum analyzer can make up for the ear's reduced sensitivity to high-frequency content. Most music is simply too dynamic, and sounds differ too much from song to song, for there to be simple rules for interpreting a real-time spectrum.
I've done scientific work on spectral analysis and I know how hard it still is today to develop an algorithm that "simply" splits a song into its individual instruments without nasty side effects.
A spectrum analyzer can be helpful for finding exact frequencies that you know exist in your track, for "seeing" the noise floor, for THD measurements, for loudspeaker and EQ design, and for some other applications, but I often see people using analyzers in situations where they add no benefit except nice graphics to watch.
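For what it's worth, the noise-floor and THD uses you mention are fairly mechanical to script; a rough NumPy sketch of that kind of measurement (the function name and the test signal are made up for illustration):

```python
# Rough sketch of a sine-based measurement: THD and per-bin noise floor from an
# FFT. The test signal (1 kHz tone, -60 dB 3rd harmonic, a little simulated
# noise) is made up; it stands in for a captured recording of a device's output.
import numpy as np

def thd_and_noise_floor(signal, fs, fundamental, n_harmonics=5):
    window = np.hanning(len(signal))
    spec = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    def peak_amp(f):
        return spec[np.argmin(np.abs(freqs - f))]   # bin nearest the target frequency

    fund = peak_amp(fundamental)
    harmonics = [peak_amp(fundamental * k) for k in range(2, n_harmonics + 2)]
    thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fund
    noise_floor_db = 20 * np.log10(np.median(spec) / fund + 1e-15)
    return thd, noise_floor_db

fs = 48000
t = np.arange(fs) / fs
test = (np.sin(2 * np.pi * 1000 * t)
        + 0.001 * np.sin(2 * np.pi * 3000 * t)      # -60 dB 3rd harmonic
        + 1e-5 * np.random.randn(len(t)))           # stand-in for recording noise
thd, floor_db = thd_and_noise_floor(test, fs, 1000)
print(f"THD ~ {100 * thd:.3f} %, per-bin noise floor ~ {floor_db:.0f} dB relative to the fundamental")
```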
OK. I acknowledged that people sometimes do that, but that doesn't seem to be the case here. If you think the analysis is flawed in this particular case, how about demonstrating that to us?
You can uncover a lot of flaws in a compressor using this kind of analysis, which is why audio engineers do it. There is nothing in these analyses that wouldn't also apply to real-world signals. You can literally prove this mathematically.
There are other things for which this analysis would not be appropriate, and for those you would use different tests (envelope response, for example), though those are more subjective up to a point.
Not sure what the point you're trying to make here is, but aliasing will land at lower frequencies that you absolutely can hear, as 'harsh' inharmonic sounds. Typically people describe it as digital noise or harshness. It certainly does not sound 'analog', which matters if a plugin is marketed as having an analog sound.
How much aliasing matters depends on the input signal and on how much of it there is. If you can keep aliasing around the noise floor, most people won't worry about it. If aliasing only affects higher frequencies, then your compressor may sound great for some instruments while sounding like total garbage for instruments with more high-frequency content (something you could literally hear with the DDMF compressor).
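To make the oversampling trade-off concrete, here's a rough NumPy/SciPy sketch using a tanh stage as a stand-in for a compressor's nonlinearity (real gain computers behave differently, but the folding mechanism is the same): running the same saturation at 8x the sample rate and filtering back down leaves far less aliasing in the audible band, at the cost of extra CPU and filter latency.

```python
# Sketch of the oversampling trade-off: the same saturation applied at 1x and at
# 8x the sample rate. Oversampling pushes the distortion harmonics below the
# (much higher) Nyquist, so far fewer of them fold back into the audible band.
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 15000 * t)

def saturate(s):
    # stand-in nonlinearity; the aliasing mechanism is the same for any distortion
    return np.tanh(3.0 * s)

def worst_alias_db(y, fs, fundamental=15000):
    # level of the strongest component that is neither DC nor the test tone,
    # relative to the strongest component overall
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    others = (np.abs(freqs - fundamental) > 200) & (freqs > 200)
    return 20 * np.log10(spec[others].max() / spec.max() + 1e-15)

plain = saturate(x)
oversampled = resample_poly(saturate(resample_poly(x, 8, 1)), 1, 8)

print("worst alias, no oversampling:", round(worst_alias_db(plain, fs), 1), "dB")
print("worst alias, 8x oversampled :", round(worst_alias_db(oversampled, fs), 1), "dB")
```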
I thought the video actually did a great job of clarifying all these issues and was very fair.
Building a spectrum analyzer accurate enough for the kind of work that Blue Mangoo was doing is pretty trivial. Splitting a song into individual instruments is in a completely different category. I know how to build a spectrum analyzer (and have done so - because I have strange hobbies). No idea how to even begin with one which splits a song into instruments.
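For anyone wondering what "pretty trivial" means here: the core of a basic analyzer is just a windowed FFT scaled to dB, roughly this sketch (averaging, log-frequency display, and calibration left out).

```python
# Minimal sketch of the core of a spectrum analyzer: window, FFT, magnitude in dB.
import numpy as np

def spectrum_db(block, fs):
    window = np.hanning(len(block))
    spec = np.fft.rfft(block * window)
    # normalize so a full-scale sine that lands exactly on a bin reads 0 dBFS
    mag = 2.0 * np.abs(spec) / np.sum(window)
    freqs = np.fft.rfftfreq(len(block), 1 / fs)
    return freqs, 20 * np.log10(mag + 1e-12)

fs = 44100
t = np.arange(4410) / fs                     # 4410 samples -> 10 Hz bin spacing
freqs, db = spectrum_db(np.sin(2 * np.pi * 1000 * t), fs)
print(f"peak at {freqs[np.argmax(db)]:.0f} Hz, {db.max():.1f} dBFS")
```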
The use of the spectrum analyzer in the video seemed totally appropriate.
Compressor Audio Unit by Ngo Minh Ngoc
https://apps.apple.com/ar/app/compressor-audio-unit/id1467711076?l=en
@Blue_Mangoo: I am getting around to seeing whether Audulus is generating a clean enough sine and sine sweep for the type of analysis you did, and whether SpectrumView is up to the task. Is there a free AU plugin you can think of that would introduce bad aliasing, which I can use to check out my setup?
Thanks.
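In case it helps with the setup: one way to take the source out of the equation is to generate the test tone and sweep offline in double precision and run the resulting files through the AU, so anything that shows up in the analyzer afterwards didn't come from the source files. A SciPy sketch (the file names are just placeholders):

```python
# Sketch: generate known-clean test signals offline so that any artifacts seen
# later came from the plugin under test (or the playback chain), not the source.
import numpy as np
from scipy.io import wavfile
from scipy.signal import chirp

fs = 44100
dur = 10.0
t = np.arange(int(fs * dur)) / fs

tone = 0.5 * np.sin(2 * np.pi * 1000 * t)                            # steady 1 kHz sine
sweep = 0.5 * chirp(t, f0=20, f1=20000, t1=dur, method='logarithmic')  # 20 Hz - 20 kHz sweep

wavfile.write("test_tone_1k.wav", fs, tone.astype(np.float32))
wavfile.write("test_sweep_20_20k.wav", fs, sweep.astype(np.float32))
```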
God I love this forum
Can you explain the image a bit? I see white edges and a red core to the graph lines. Does this imply a range of values over a lot of cycles, or a comparison of a pure input to an output? Is this a desktop tool for spectrum analysis like the one the BlueMangoo guy used? If the details are in a prior thread comment, I'm sorry.
Nice. Are you feeding it audio files or do you have a way to feed it a live signal from Audulus?
If files, how do you record them? Again, too lazy to read the whole thread.
Yes. My tests are also showing a clean sine ... cleaner than I expected. Hence, I want to see whether that means the sine really is clean, or whether SpectrumView just isn't sensitive enough (or it's something to do with settings).
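One settings-related thing worth checking: FFT size and window choice change the apparent noise floor a lot. A quick sketch with made-up numbers: a -100 dB artifact that a long, Hann-windowed FFT shows clearly gets buried under spectral leakage with a short, rectangular-window FFT.

```python
# Sketch: how analyzer settings change what you see for the same signal.
# A short FFT with a rectangular window hides a low-level artifact under the
# leakage of the main tone; a long Hann-windowed FFT shows it at its true level.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
# 1 kHz sine plus a -100 dB artifact at 3.7 kHz (amplitudes are made up)
x = np.sin(2 * np.pi * 1000 * t) + 1e-5 * np.sin(2 * np.pi * 3700 * t)

for n_fft, win_name in [(1024, "rectangular"), (65536, "hann")]:
    block = x[:n_fft]
    window = np.ones(n_fft) if win_name == "rectangular" else np.hanning(n_fft)
    spec = 2 * np.abs(np.fft.rfft(block * window)) / np.sum(window)
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    artifact_bin = np.argmin(np.abs(freqs - 3700))
    print(f"{win_name:11s} N={n_fft:6d}: level near 3.7 kHz reads "
          f"{20 * np.log10(spec[artifact_bin] + 1e-15):.0f} dBFS")
```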
Didn't you have an earlier test in which there were artifacts in the Audulus signal or am I mis-remembering?
What is different about how you generated the sounds that aliased and those that didn't?
@tja: BTW, it seems that there is something screwy about SpectrumView's audio playback. There is distortion and aliasing I hear when it plays the test sounds (including a test sweep I downloaded) that isn't there when they're played with other apps. Do you experience that also?