Audiobus: Use your music apps together.


Time-stretching Siri with Cubasis

Using a fair bit of slicing and the pro time-stretch algorithm in Cubasis, I've managed to align Siri reading a rhythmic poem over a musical setting. I also used the project to get a better understanding of Moog Model 15 programming and to tweak the Sinfonietta Sintesi IAP patch bank. Not everyone's cup of tea, I realise, but I don't do easy.


Comments

  • Crazy, but crazy in a good way!

  • “Don’t do easy” is a gross understatement. I tried something similar but not nearly as complex or rhythmically accurate. So I well understand the effort that went into this. A really fun track to listen to. Love it.

  • And this is exactly why I stopped dropping acid....

  • Refreshing!

  • This is very very cool. All the chopping and nudging took how long?

  • This is fun!!
    I wonder what it would sound like with the voice re-pitched and autotuned :D

  • Weird & Wonderful! :sunglasses:

  • Thanks each for listening and commenting - much appreciated.

    @StudioES I processed Siri for Mariner Man in an afternoon but Tarantella took around 3 days off and on - a lot more words. There are 80 splits and 54 of those are time-stretched.

    The workflow is:

    • Type/copy the text into Pages
    • Enable Speak Screen (Settings → Accessibility → Spoken Content)
    • Split words into syllables or soundalike words where necessary to match required meter
    • Adjust pronunciation either by substituting words or selecting alternative pronunciations in Spoken Content settings
    • Take a screen recording of Siri speaking the screen
    • Strip the audio out of the video — I used LumaFusion to export just the audio
    • Import audio into Cubasis
    • Align, split and stretch as required.
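
    The final align/split/stretch step boils down to computing a stretch factor (target duration ÷ spoken duration) for each slice. A rough sketch of that arithmetic in Python — the function name, tolerance, and all durations here are hypothetical, not anything Cubasis exposes:

    ```python
    # Sketch: per-slice time-stretch factors so each sliced syllable
    # lands on the required beat grid. All values are made up.

    def stretch_factors(spoken_durations, target_durations, tolerance=0.02):
        """Return one stretch factor (target / spoken) per slice.

        A factor within `tolerance` of 1.0 is snapped to 1.0, meaning
        the slice can be left unstretched; anything else would need
        the time-stretch algorithm applied.
        """
        factors = []
        for spoken, target in zip(spoken_durations, target_durations):
            factor = target / spoken
            if abs(factor - 1.0) <= tolerance:
                factor = 1.0  # close enough: leave the slice untouched
            factors.append(factor)
        return factors

    # Siri's syllable lengths (seconds) vs. what the meter demands:
    spoken = [0.31, 0.18, 0.25, 0.40]
    wanted = [0.25, 0.25, 0.25, 0.50]
    print(stretch_factors(spoken, wanted))
    ```

    On these example numbers, the third slice comes out as 1.0 (no stretch needed), which mirrors the Tarantella figures above: 80 splits but only 54 of them actually time-stretched.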

    It took a similar length of time to transcribe the written music score: into StaffPad, export MusicXML, import that into MuseScore (PC), export MIDI, then import the MIDI into Cubasis. The reason for going via MuseScore is that StaffPad's MIDI export is rather limited.

    As @rs2000 suggests, it would be interesting to vary Siri’s pitch. I have thought about using Siri as the modulator source for a vocoder. The Façade pieces are intended to be spoken rather than sung but for general use I think the challenge with trying to get Siri to sing would be with sustained notes.

  • Awesome @AndyHoneybone! Thanks for posting your workflow.

    Reminds me of cutting n pasting in Sound Forge. May be fun to try again one weekend, I think.

    Looking forward to more of your work.

  • @AndyHoneybone The Waves Tune IAP is still 50% off, maybe worth a try?
