Has anyone written an editable toggle script for Launchpads?
One that gives light feedback?
It would be useful to me for mutes.
I assign nanoKontrol2 mutes in AUM. Apparently there's no way to make them responsive.
I'd like to use as few controllers as possible.
So I could probably drop the nanoKontrol2 if I could control 22 different mutes with the Launchpad. That would fit in 3 rows.
I'd like to use the remaining rows as a keyboard to control different channels simultaneously.
I've used the channel splitter script for that.
How can I have permanent lights on 5 or 6 rows, each in a different color?
It would be cool if a pad changed color when I play it.
I've got some more ideas, so if anyone wants to collaborate that would be amazing.
Totally forgot about this! Just gave it a try and it seems to trigger all the notes at once which makes an awesome chord, but not a Shepard tone. Unless I’m doing it wrong?
I imagined one would play it in the following way: play C2, then D2, then E2, F2, G2, B2, and back to the original C2, and you should get a constant rise, since C2 and C3 will produce an identical note/velocity "chord" combination. I was after a manually played Shepard scale, not a single rising Shepard tone.
Or maybe my idea didn't work out.
Ah sorry I had a single rising Shepard tone in mind when you hold a note. I might be able to figure it out from what you wrote.
The secret of Shepard-tone style music is to do this:
1. Have an instrument play a melody that starts on a low note and keeps rising.
2. As the melody rises, lower its volume, like a slow fade-out on a recording that doesn't stop the music; it just becomes inaudible. So, w.r.t. MIDI, just lower the velocity to zero for just that ONE voice/channel.
3. About halfway through the 1st voice's "rise", start another voice.
4. This second and all subsequent voices have a velocity/volume arc that's low initially and rises in volume; as the 1st voice slips away, the new voice hits its peak velocity/volume.
You end up with an orchestra of instruments that never play a descending melodic pattern. You would never take one voice and drop it back down. You could re-introduce it with the volume arc... slipping it back into the ascent.
This should be trivial with Mozaic once you decide how the "notes" are determined. I can think of many ways to do that, including only forwarding notes that are above the currently played notes. That would really limit melodic ideas. An ascending melody can have occasional downward intervals and still show an overall ascending pattern, like a rising stock price over 10 years, like Apple.
I'm having a blast creatively layering synths in AUM.
I'm using the ADSR envelopes of each synth to create varying lengths of delay before a given synth adds its sound contribution to the layering timeline.
This allows me to build up sounds in a somewhat wavetable-like manner, but it's much deeper, because I can choose whether a synth will sustain until the key is released, or just have a synth contribute a brief "flavor" of sound somewhere in the sound-stream and then fade out while another synth rises in to add another element to the layer.
These sounds I'm building can be short, tapering off, or continuously sustaining.
Here is the idea for the Mozaic Script......
Using the knobs layout. That looks like this..
O O O O O O O O O O
O O O O O O O O O O
Each column of two knobs represents a midi channel.
Let's say we made the script to control five MIDI channels... It would look like this...
O O O O O
O O O O O
1, 2, 3, 4, 5
Here's how it works...
When key(s) are played, the midi from the source keyboard is routed into Mozaic on a single midi channel.
The upper knob has an off position when turned all the way to the left. Turning it right switches that channel on; the knob is then used to adjust the delay time before a note-on event is sent, from 0 to 5 seconds (adjustable in 25 ms increments).
The lower knob adjusts the duration of the note on event from something like 1 to 10 seconds, with full sustain on the far right.
The purpose of this program is to allow sequencing the timing of note-on and note-off events relative to a key press, controlling when each of a number of synths loaded into AUM audio slots starts and stops sounding.
I think the complex part of writing such a script would be working out how to play polyphonic notes and apply the timing to each played note independently of any other notes.
The script must also be able to send a MIDI note-off for a given note at any time the relevant key is released, even when that release occurs anywhere in the timing sequence. Any notes that remain pressed must continue to progress through their own independent timing cycles.
The script should allow for polyphonic playing of the multiple synths, the same as if all those synths were one synth with polyphonic capability.
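To make the two-knob spec concrete, here is a small plain-Python model (not Mozaic code; the function and parameter names are illustrative). Per channel, the upper knob is a delay from key press to note-on, and the lower knob is a duration, with `None` standing in for the "off" and "full sustain" knob positions:

```python
def schedule_note(press_time, channels):
    """Return (channel, on_time, off_time) events for one key press.

    channels: list of (delay_s, duration_s) tuples, one per channel.
    delay_s None   = upper knob fully left, channel off.
    duration_s None = full sustain; note-off only comes from key release.
    """
    events = []
    for ch, (delay_s, duration_s) in enumerate(channels, start=1):
        if delay_s is None:  # channel switched off
            continue
        on = press_time + delay_s
        off = None if duration_s is None else on + duration_s
        events.append((ch, on, off))
    return events

# Three channels: immediate with a 1 s duration, half-second delay with
# full sustain, and a 3 s delay with a 5 s duration.
print(schedule_note(0.0, [(0.0, 1.0), (0.5, None), (3.0, 5.0)]))
```

Each key press would produce its own independent event list like this, which is what makes the polyphonic case tractable.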
Could Mozaic be used to make such a script??
Is there any particular reason not to use all the knobs ... 10 channels?
I’m not sure I understand. A Note-On event doesn’t have a duration. Do you mean that no note-off is sent until the delay set by the second knob? In that case, when is a note-off sent if you set it to “full sustain”?
That part is easy. Whenever a note message comes in, it can be scheduled to be sent out after a delay (set by the first knob). It doesn’t matter how many notes there are at once. Mozaic takes care of it. It can be done in a few lines of code.
I don’t understand. I think you said above that the note duration is set by the second knob. But now you seem to be saying it’s set by holding down the key?
The script wouldn’t know or care what kind of synth or how many of them. It just sends notes out on the channels you tell it to and after the delays you set.
Could be, but the above questions about note-off timing would need to be cleared up.
It could be 10 channels.
I only suggested 5 to simplify what I thought might be a complex script.
Correct. No note-off event is sent until the delay set by the second knob has elapsed... unless the key that initiated that note is released before the end of the delay. Key release always has priority in ending the script's event sequence.
That's great. That should mean the script would be less complex than I imagined.
The purpose of the script is to perform its sequence of delay events only so long as the key(s) are pressed.
....In the same way an arpeggiator only plays its note sequence while the notes are held down.....
Except, the purpose of this script is not to apply timing to the playing of the pitch of notes like an arpeggiator.
The purpose of this script is to send MidiNoteON and MidiNoteOff key playing events to a synth on a specific midi channel, according to what the user programs the delay time to MidiNoteON, and the following delay time to MidiNoteOff.
BUT... The sequence is only active as long as each key that triggered a sequence remains held down. Once the key is released the script sends an instant MidiNoteOff for the note that was released.
Example: Imagine we load three synths into AUM. We set each synth to receive notes from a different midi channel.
i.e. channels 1, 2, and 3.
MIDI channel 0 is reserved for sending played keyboard notes to the Mozaic script.
While a keyboard note is held down...
• The Mozaic script is programmed to send "that" played note immediately to synth 1.
• Synth 2 is programmed to receive its note-on event half a second after the key is pressed.
• Synth 3 is programmed to receive its note-on event 3 seconds after the key is pressed.
But if the key is released before 3 seconds have passed... Synth 3 should not play.
The creative potential for the script comes from giving the user the ability to independently adjust the timing for when the midi-on and midi-off notes are received by each synth.
When a key is pressed....
• Synth 1 could begin playing the note immediately, but then send a note-off event 1 second later.
• Synth 2 could be programmed to receive its note-on event half a second after the key is pressed, but the user can adjust the ADSR envelope for Synth 2 so the attack of the sound it plays has a gentle rise time. This lets Synth 2 gradually build a layer with Synth 1 for a half-second before Synth 1 turns off, leaving Synth 2 as the only synth sounding.
• Synth 3 could be programmed to receive its note-on event 3 seconds after the key is pressed. Synth 3 also has its ADSR adjusted to provide a gentle rise time as its sound layers with the sound of Synth 2.
• Both Synth 2 and Synth 3 can be set to full sustain, and will play as long as the key is held down.
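The key-release rule above ("if the key is released before a synth's delay has elapsed, that synth should not play") can be sketched as a tiny plain-Python model (illustrative names, not Mozaic code):

```python
def synths_that_play(hold_s, delays_s):
    """Which synths receive a note-on, given how long the key is held.

    A synth only plays if its note-on delay elapses strictly before the
    key is released; release cancels any still-pending note-ons.
    """
    return [i + 1 for i, d in enumerate(delays_s) if d < hold_s]

# Synth 1 immediate, synth 2 after 0.5 s, synth 3 after 3 s.
delays = [0.0, 0.5, 3.0]
print(synths_that_play(2.0, delays))  # key held for 2 s: synth 3 never fires
print(synths_that_play(4.0, delays))  # key held for 4 s: all three fire
```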
Maybe we could call this program something like a "Midi Channel Synthesizer Phase Mixer".
Multiple synths can be programmed with this script to "phase" in and out with any timing the user chooses.
This lets the user specifically program each synth in a way that will add a desired sound element to the Phase Mix.
Because Phase Mixer is time based, sounds that evolve over time can be constructed using individual synths as something akin to voices singing different parts in a choir, or instruments playing parts together in a symphony.
It's similar in some ways to wavetable synthesis, but the difference is that "Midi Channel Synthesizer Phase Mixing" allows multiple instruments to "overlap" or "layer" their sounds "gracefully" in extremely complementary ways.
Sounds created with "Midi Channel Synthesizer Phase Mixing" can have an immense scope of evolving sound elements.
The best part of "Midi Channel Synthesizer Phase Mixing", is it utilizes specific sound qualities from many different synths, and the varieties of sounds that can be created are virtually endless.
It's something like a form of modular synthesis that utilizes multiple Synthesizers to construct sounds that can evolve over time.
The reason we want the keyboard key release to override the control over the MidiNoteOff event is that we want it to be playable. This adds to the versatility: a single synth can be set up to play any short notes, but when notes are held down, an immense cascade of evolving sound can build up to create stunningly complex synthesizer phase arrangements.
Maybe @brambos could build us a "Midi Channel Synthesizer Phase Mixing" AUv3 app that could use internal AUM MIDI routing, so it won't be necessary to assign MIDI receive channels in each individual synth.
The Synth Phase Mixing App could expose all 10 of its outputs so they can be quickly routed to synths using the routing grid in AUM. That way the same synth could be phased in and out of the mix more than one time if the user desired.
It would also be nice to have a made-for-the-purpose GUI that would include large knobs for fine timing adjustment, with pseudo-LEDs to display the timings. An on/off button could be added for each timing column to free up the knobs for timing adjustments only.
I think it could be a marketable App.
Let’s say synth 1 was delayed by 0 seconds, synth 2 by 1 second, and synth 3 by three seconds. If a key is held down for two seconds, the first starts immediately, the second one second later, and the third three seconds later. But, when do the note-offs occur?
I suggest not using note delays but implementing an OnTimer that handles the note-on/note-off functionality. Otherwise, in wim's last example, synth 3 would start to play even though the triggering key is already released.
I know that's a bit more complicated than using MIDI message delays, as the script needs to manage the note-on issue time and note-off time per incoming note. You also need to manage a list of changes that can then be acted upon in the OnTimer. I would store SystemTime + xxx, and in the OnTimer first compare note-state changes to perhaps send the note-off for active out-notes to stop them, and then have a second part comparing the issue time with the system time to see if there are notes to be started, but only if the triggering note is still active. Or remove still-to-be-scheduled notes when their trigger gets released.
There are probably several solutions to this
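The event-list approach described above can be sketched in plain Python (illustrative structure, not Mozaic code; a Mozaic OnTimer would poll a list like `pending` on each tick, comparing stored times against SystemTime):

```python
import heapq

class Scheduler:
    """Event list polled by a periodic timer.

    Pending note-ons are stored with their due time; releasing the
    triggering key cancels any not-yet-sent note-on and returns
    immediate note-offs for notes already sounding.
    """
    def __init__(self):
        self.pending = []       # heap of (due_time, channel, note)
        self.sounding = set()   # (channel, note) pairs already sent

    def note_on(self, now, note, delays_by_channel):
        for ch, delay in delays_by_channel.items():
            heapq.heappush(self.pending, (now + delay, ch, note))

    def note_off(self, note):
        # Remove still-to-be-scheduled note-ons for this note...
        self.pending = [e for e in self.pending if e[2] != note]
        heapq.heapify(self.pending)
        # ...and return immediate note-offs for notes already sounding.
        offs = sorted(p for p in self.sounding if p[1] == note)
        self.sounding -= set(offs)
        return offs

    def on_timer(self, now):
        # Called periodically: emit note-ons whose due time has passed.
        out = []
        while self.pending and self.pending[0][0] <= now:
            _, ch, note = heapq.heappop(self.pending)
            self.sounding.add((ch, note))
            out.append((ch, note))
        return out
```

For example, scheduling note 60 on channels 1/2/3 with delays 0, 0.5, and 3 seconds and then releasing the key at 1 second stops channels 1 and 2 and cancels channel 3 before it ever fires.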
Although all this can be done inside iWavestation and KASPAR, it would certainly be cool to have this for any synth and sampler (finally allowing us to do Wavestation-like sequences with bs-16i)
I would agree with @horsetrainer that note-off messages should have immediate effect and note-ons shouldn't be sent if a note-off for the same note was received before.
An "OnTimer" solution sounds reasonable.
Is it possible to use mozaic to create a xyz pad?
The XY pad for MFX in Grooverider can be triggered with CC 102/103 for X/Y and CC 106 for turning it on and off.
So I wish to use Mozaic to get the exact same behaviour, so I can record the XY pad MIDI in Xequence, because currently you can only record this internally in GR16.
It would be great if this already exists or can be made, because that would be amazing.
The amount of time between each note-on and each note-off can be adjusted using the lower knob.
The note-offs occur when the amount of time set by the lower knob ends.
However, a key release will send an immediate note-off regardless of any timer settings.
Key Press---> Upper Knob time setting runs---> Note-On is sent---> Lower Knob time setting runs---> Note-Off is sent.
Here's an example of a setup...
We'll refer to....
"Upper Knob time setting" as "Timer 1"
"Lower Knob time setting" as "Timer 2"
"Any one individual keyboard key held down" as "Key Press"
• However, a key release will send an immediate note-off to all synths playing that particular note, regardless of any timer settings. (All other held-down notes remain under the control of the script.)
• The script only performs its note-on/off timing functions while any given key remains held down.
Mozaic has an xy pad built in.
@brambos is probably looking for his next product. I was hoping to see more uptake on Mozaic for a few extra features, but I can't believe it could compete with his sound-engine-based products for fast payback.
Maybe the boffins here can tackle "Midi Channel Synthesizer Phase Mixing", or explain what would be needed to create this capability in Mozaic. Maybe we don't even need anything new beyond a great script.
You mentioned KB-1 in another thread. What is it that its XY pads don't do that you need?
@horsetrainer - These are the 5 optional GUIs a script can display. Flipping between them is also available for the scripter to use in the design.
@horsetrainer when you think of what's possible with Mozaic you need to think of events:
Press a key? MIDI Note On event = Mozaic scripts can create a list of additional events based on this single trigger
Lift the Key? another event MIDI Note Off = use it and add to the list of events to happen
The list of extra events is all managed by the Mozaic run-time interpreter/engine.
Mozaic provides internal timing tools for scheduling internal events in time.
So, the programmer uses events and these timing features to implement your "I'd like to have this happen".
What Mozaic doesn't do is "wait and then do this". It can just request more events that the run-time adds to its list. It can keep a very precise and stable event list with thousands of events; more events than any synth can respond to... so scripters can invent scripts that no synth can play, even if Mozaic can output the MIDI stream.
How would you write a delay in Mozaic for one played key from MIDI channel 0?
When the note is played the script will wait 1 second.
Then send out that note on Midi channel 1.
Let the note play for 8 seconds.
Then turn the note off?
The next question is...
How do you get Mozaic to send a MidiNoteOff to turn off the note turned on by the first part of the script, if you take your finger off that played key before the 8 seconds is up?
OK, that’s the key part. So yes, as @_ki mentioned, the script is a little more complicated because it can’t just fire off the note-off messages with the same delay as the note-on.
If synth 1 is at zero delay and synth 2 is at one second, but the note is only held for 0.5 second, does synth 2 not play at all?
Correct. synth 2 should not play at all.
Thanks for working on this wim!!!
I've written a script which allows me to use my Novation Peak as a physical MIDI controller for a Blofeld, which I can post if anybody has that pair of physical synths and is interested in doing likewise. (Being able to control the Blofeld via a physical controller with loads of knobs and sliders is amazing; it's like having an entirely new synth. I was planning on getting an Argon8, but now there's no need whatsoever. The Blofeld is so deep and amazing, but with everything buried within its menus and not readily accessible, I've never bothered with it much in the past.)
(The script maps the Peak's controls to almost all of the Blofeld's controllers listed in the controller section of the Blofeld manual; though it's somehow possible to control much more than just those listed in the manual, as there are more things controllable via the PatchBase Blofeld app for iPad. But as those are not documented, I don't know how PatchBase is controlling them.)
Sorry ... I didn’t actually say I was going to work on it. I’ve just been trying to understand the spec. I might work on it, but I also am more motivated to work on things I might use myself. This isn’t something I’d use.
So ... if anyone else is interested in taking it up, go for it! Otherwise I will have to see if my curiosity or boredom levels cross the point that would talk me into doing it.
@OnLoad
  delay = 1000 // one second, in milliseconds; use 100 for 100 ms
  count = 0
@End

@OnMidiNoteOn
  // SendMIDIOut MIDIByte1, MIDIByte2, MIDIByte3 // uncommented, this line alone is an instant MIDI thru
  SendMIDINoteOn 1, MIDINote, MIDIVelocity, delay // resend on MIDI channel 1 after the delay
  // Won't work until you add more list management to turn off the right notes in order:
  count = count + 1
  my_note[count] = MIDINote
  SetTimerInterval 8000
  StartTimer
@End

@OnTimer
  // Synths treat velocity = 0 as a note-off request
  SendMIDINoteOff 1, my_note[count], 0
  StopTimer
@End
That's intuitively obvious and left to the interested reader to figure out and benefit from solving a nice little puzzle. (That's TA-speak for "Not sure, but I could see it being possible. Please show me your work.")
That's all I live for. It's better than working to injure people. That's fun too.
Well, then thanks for thinking about it!
That at least gives me an idea of the complexity level involved.
I might tinker with Mozaic a little to see if I can at least get some crude levels of delay working... just for the purpose of learning and exploring a proof of concept.
It's easier for me to focus on learning about programming when the use is something that's at the absolute center of my interest in iOS music: creative "virtual analog" sound design, mixing, and sound layering.
I will study these script elements intently.
There might be errors. But that's also helpful. Debugging.
I started a whole thread to learn Mozaic:
They only send out x or y so you can only map 2 values.
To control the XY pad in Grooverider by midi you need to send cc 102,103 to move the xy pad position and also CC 106 for turning the effect on and off.
The Grooverider GR16 XY pad will turn the effect on on touch, and off on release. So that's a 3rd CC value besides X and Y.
The xy pad in KB1 or Brambos only maps 2.
Most XY pads send only two values. I think you will need to map another control that sends CC 106, or write a Mozaic script. When would the XY pad send CC 106?
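The touch/move/release behaviour described for the GR16 pad maps to three CCs, and the mapping itself is simple. Here is a plain-Python sketch of it (illustrative function, not Mozaic code; a Mozaic XY-pad script would emit the same CC numbers from its pad events):

```python
def xy_pad_events(event, x=0.0, y=0.0):
    """Map XY-pad gestures to the three CCs described above.

    'touch'   -> effect on (CC 106 = 127) plus X/Y position
    'move'    -> X/Y position only (CC 102, CC 103)
    'release' -> effect off (CC 106 = 0)
    x and y are normalized 0.0..1.0; CC values are clamped to 0-127.
    """
    def to_cc(v):
        return max(0, min(127, round(v * 127)))
    msgs = []
    if event == "touch":
        msgs.append((106, 127))       # effect on
    if event in ("touch", "move"):
        msgs.append((102, to_cc(x)))  # X position
        msgs.append((103, to_cc(y)))  # Y position
    if event == "release":
        msgs.append((106, 0))         # effect off
    return msgs

print(xy_pad_events("touch", 0.5, 1.0))
```

Recording these three CC streams in Xequence and playing them back to GR16 should then reproduce the pad gesture, assuming GR16 honours CC 106 for on/off as described.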