DEVS, can someone please make a PaulStretch AU.

I know this topic has been brought up here before, but I figured I'd try again. I know there are some apps that achieve similar results, but they're not the same. I'm pretty sure the PaulStretch code is open source, so someone who knows what they're doing could possibly port it?

I'm willing to pay a premium for this, and I'm sure others are as well.

Would you pay premium for a port of PaulStretch?
  1. Would You Pay? (66 votes)
    1. Yes, Premium.
      37.88%
    2. Maybe a Few Bucks
      53.03%
    3. No, I hate ambient stretched-out stuff.
      9.09%

Comments

  • Paulstretch is missing! Word!
    Would love to see this on iOS. Especially with the latest iPads with huge processing power (pro 2018, 2020 and of course 2021)

    What is premium?
    I am not sure, though, if you can use open-source code and sell stuff you made with it...

  • @david_2017

    Premium in iOS is 10-20?? I guess? Not sure actually, haha.

    Def would love an iOS version of PaulStretch. Would be really cool to see a slightly modernized version with LFOs or a sequencer, like Gauss etc.

  • The GNU GPL terms are such that apps using GPL code can't legally be distributed in the App Store, as the App Store license conflicts with the GPL license terms.

  • @david_2017 said:
    Paulstretch is missing! Word!
    Would love to see this on iOS. Especially with the latest iPads with huge processing power (pro 2018, 2020 and of course 2021)

    What is premium?
    I am not sure though if you can use open source codes and sell stuff you made with it...

    Depends on the license. You can do anything you like commercially with something like the MIT license. Other open-source licenses limit what can be done.

  • Source code being open-source does not always mean you can use it in your commercial products

  • edited April 2021

    @espiegel123 - That explains why it isn't here, I guess. I wonder if anyone could reverse engineer something? AudioStretch, though, comes pretty close: it stretches out over minutes, not hours, and is designed for another purpose, so it lacks the filters etc., but I still find it a pretty cool tool for that ambient stretched-out sh*t! And/or for ripping the audio from screen-recorded vids of interesting apps lacking connectivity :)

  • @wim said:

    @david_2017 said:
    Paulstretch is missing! Word!
    Would love to see this on iOS. Especially with the latest iPads with huge processing power (pro 2018, 2020 and of course 2021)

    What is premium?
    I am not sure though if you can use open source codes and sell stuff you made with it...

    Depends on the license. You can do anything you like commercially for something like a MIT license. Other open source licenses limit what can be done.

    True, I’d love to see Surge on iOS, but some sh*t with the App Store licensing model prevents that from happening unless Apple changes their mind...
    https://surge-synthesizer.github.io/

  • wim
    edited April 2021

    Paulstretch is released under the GPL 2.0 license, which means it can’t be used in products on the App Store.

  • I’m sure I came across a PaulStretch implementation in Python a while back.

  • There’s a command line python version. I wonder if it could work in Python or Pythonista. (Not enough to bother trying myself though.)

  • @wim said:
    There’s a command line python version. I wonder if it could work in Python or Pythonista. (Not enough to bother trying myself though.)

    It looks to me like the Paulstretch Python version depends on NumPy and SciPy, but SciPy isn't supported by Pythonista, so you can't install it.
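(Editor's note: for what it's worth, SciPy is only used by the original command-line script for WAV file I/O; the spectral core is pure NumPy. Below is a minimal sketch of that core. All names and defaults are my own, not the original script's.)

```python
import numpy as np

def paulstretch(samples, stretch=8.0, window_size=4096):
    """Minimal sketch of the Paulstretch core: step through the input
    in small increments, FFT each Hann-windowed chunk, randomize the
    phases while keeping the magnitudes, and overlap-add the results."""
    n = np.arange(window_size)
    window = 0.5 - 0.5 * np.cos(2 * np.pi * n / window_size)  # Hann window
    half = window_size // 2
    in_step = window_size / (2.0 * stretch)  # small input hop -> long output
    tail = np.zeros(half)
    out = []
    pos = 0.0
    while int(pos) + window_size <= len(samples):
        chunk = samples[int(pos):int(pos) + window_size] * window
        spectrum = np.fft.rfft(chunk)
        # Keep the magnitudes, randomize the phases: this is what smears
        # transients into the characteristic ambient wash.
        phases = np.random.uniform(0.0, 2.0 * np.pi, len(spectrum))
        smeared = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases)) * window
        out.append(smeared[:half] + tail)  # overlap-add with previous half
        tail = smeared[half:]
        pos += in_step
    return np.concatenate(out) if out else np.zeros(0)
```

The stretch comes from the mismatch between the tiny input hop and the fixed half-window output hop; output length is roughly `stretch` times the input length.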

  • I cannot vouch for it but pyto https://apps.apple.com/gb/app/pyto-python-3/id1436650069 supports those libraries. With more effort still, I believe iSH https://apps.apple.com/gb/app/ish-shell/id1436902243 will open up the options in this area for “free”. There is a lot of untapped potential in there.

  • Thanks for the input. I didn't know about the whole license and Apple stuff. Pity and a bummer, as Surge and Paulstretch are really cool.

    I bought auditor for the stretch feature, but it's not quite Pauly... I am interested to see where this thread is going, especially the Python part ;)

  • GPL and App Store definitely have some, umm, issues working together.
    Someone could do a port and then release their source code under GPL. But then anyone who wanted to use it would need to compile it for themselves. The dev couldn't release it on the App Store.

    I took a look at the Git source. It's an FLTK-based program that is pretty old. It would take nearly a complete rewrite, not a simple port.

    The dev hasn't just placed the source under the GPL. They have been much cooler than that: the Python source and the algorithm have been placed in the public domain. So it would be possible to re-implement the algorithm as an iOS app or as an AU. I hope that anyone who did this would return the favor to the original dev by making their implementation free and pointing back to the dev's web page, where there is a donation button.

    Isn't the original program meant to be used for offline processing? If someone were to make an AU of this, I assume it would have to be a generator sort of thing. How would someone use it in Audiobus, for example? From what I've read, it doesn't seem like it would work as an effect on live audio.

  • edited April 2021

    Koala can stretch things pretty far if you keep bouncing and readjusting the clip length, but yeah, it would be nice to have Paulstretch.

  • Yep, I’d pay big bucks for an iOS version of this, with no hesitation whatsoever.

  • The algorithm isn’t GPL and is pretty easy to implement, tbh.

    Not sure if it makes sense as an AUv3, though, as it’s more an offline-render type of deal.

  • @Svetlovska said:
    @espiegel123 - That explains why it isn’t here, I guess. I wonder if anyone could reverse engineer something? AudioStretch though, comes pretty close, stretching out over minutes not hours, and designed for another purpose, so lacking the filters etc, but I still find it a pretty cool tool for that ambient stretched out sh*t! and/or ripping the audio from screen recorded vids of interesting apps lacking connectivity :)

    I feel like this dev is the most likely candidate for implementing something similar in an app that already exists: https://apps.apple.com/us/app/hokusai-audio-editor/id432079746

    No insider knowledge or anything; it just seems to me to fit well with their current offerings in the App Store.


  • Wow, lots of great info in this thread, thanks for all the feedback!!

    @NeonSilicon I use Paulstretch in Ableton as a VST. I believe you can use it standalone as well. I prefer to use it as a VST because of automation, modulation, external FX, etc.

  • Maybe SoundFruuze can fill in some of the gaps sonically. But yes, it would be amazing if something similar came out at some point. Perhaps like SlowMachine but with granular stretching and automation. Or Gauss but all about stretching. Or, going even further, some sort of Glitchcore meets Spacecraft. I'm dreaming big here, haha.

  • Have you tried Loopy? Record something at a highish BPM and drop it down to 20. It's a pretty convincing Paulstretch.

  • Found a Loopy experiment I posted in a very similar paulstretch thread a few years back.

    @syrupcore said:

    Had to try it. Last 12 seconds of U smile, recorded at 400bpm in Loopy HD from my laptop speaker into the iPhone mic (I'm hi-fi like that). Slowed it down to 50bpm (=800% slower), loaded into AUM with a LPF + Push + Dub + Space and... not bad for an app designed for looping and a decidedly lo-fi experiment, eh? Definitely not nearly as 'clean' as the paul stretch algorithm but it works. The digital distortion is me working quickly, not loopy. The LPF is in there to roll off some of the stretching artifacts.

  • I think the essence of stretching is this: a recorded piece of audio is an ever-evolving sequence of waveforms. One complete wave has three zero crossings, and the first and third crossings define the character and frequency of the waveform. If you repeat that waveform, you have a tone with that character and frequency.
    If you step through the recorded audio very slowly, it pitches down, because each individual wave takes longer.
    The trick, I think, is to measure when the waveform reaches a zero crossing, look ahead to where it next passes through a matching zero crossing, and treat that as one complete wave cycle. Repeat that cycle (a tone) until you reach the start of the next complete wave cycle of the recording, and so on.
    The pitfall: depending on how fast you scan through the recording, you can't reproduce complete waveforms if the scan speed isn't an even division of the actual recorded speed.
    My head is going haywire...

  • What could work here is to install Xcode, download the source code, recompile, and then 'side-load' the apps.
    But that in turn requires that someone takes the time to 'package' the sources and needed files for distribution (i.e., check that they compile and that the app works).

    This would be one way to work around the App Store restrictions...

    TwistedWave already has a Zynaptiq/ZTX time stretch that can do some extreme stretching if needed.

  • @syrupcore said:
    Found a Loopy experiment I posted in a very similar paulstretch thread a few years back.

    @syrupcore said:

    Had to try it. Last 12 seconds of U smile, recorded at 400bpm in Loopy HD from my laptop speaker into the iPhone mic (I'm hi-fi like that). Slowed it down to 50bpm (=800% slower), loaded into AUM with a LPF + Push + Dub + Space and... not bad for an app designed for looping and a decidedly lo-fi experiment, eh? Definitely not nearly as 'clean' as the paul stretch algorithm but it works. The digital distortion is me working quickly, not loopy. The LPF is in there to roll off some of the stretching artifacts.


    Wow, this sounds awesome! Great idea! I'll try this when I'm back home again!

  • I'm getting geeky here, and maybe this is useful information for coding (I'm no coder, but I know a bit about digital audio).
    A sample is an array of numbers. In 8-bit, these numbers range from 0 to 255, so the digital zero crossing of a waveform is at 128; let's call that 0.
    The start of a complete wave cycle is where one value is at 0 or below (0-x) and the next value is above 0 (0+x).
    Let's call that Condition A. It corresponds to an address number in the array of the complete recording.
    A complete wave cycle runs from one Condition A to the next Condition A. It can be stored in a new array, say Array C (cycle), with its corresponding address number.
    Repeating Array C results in a tone whose character is stored in the data of that array. The frequency of the tone is determined by how fast you repeat Array C.
    If you want the pitch of that tone to match the pitch of the sample, you need to read out Array C at the sample rate of the recording.
    Now let's slow down the scan through the recording. When it meets Condition A at address x, it must repeat Array C from that same address at the original sample rate until it meets a new Condition A. If Array C hasn't finished its sequence when a new Condition A point arrives, it has to finish first, and then start the new Condition A's Array C at that address.

    Actually, you need two data files: the original recording as a reference, and an image of the recording to scan for Conditions.
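(Editor's note: the description above can be sketched almost literally in code. Below is a rough Python illustration of "Condition A" detection and cycle repetition, using floating-point samples where the zero crossing really is at 0; all function names are mine.)

```python
import numpy as np

def condition_a_points(samples):
    """Indices where the signal crosses zero going upward ('Condition A')."""
    positive = samples >= 0
    return np.flatnonzero(~positive[:-1] & positive[1:]) + 1

def cycle_repeat_stretch(samples, repeats=4):
    """Stretch by playing each complete wave cycle `repeats` times.
    A cycle runs from one upward zero crossing to the next, so the
    joins land near zero amplitude, which limits (but doesn't fully
    avoid) clicks; real implementations crossfade between cycles."""
    points = condition_a_points(samples)
    pieces = []
    for start, end in zip(points[:-1], points[1:]):
        pieces.extend([samples[start:end]] * repeats)
    return np.concatenate(pieces) if pieces else samples.copy()
```

Because whole cycles are repeated at the original sample rate, the pitch stays the same while the duration grows, which is the behaviour described above; the pitfall is that whatever evolution happens inside a repeated cycle is simply frozen.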

  • @Identor : no idea what any of that means. But the app based on it definitely needs to be called ‘Condition A’. :)

  • Consider this my best BenStretch
    https://patchstorage.com/glacial-sampler-time-stretch-texture-sampler/

    Drop the Time Stretch LFO speed even lower

  • @bcrichards said:
    Consider this my best BenStretch
    https://patchstorage.com/glacial-sampler-time-stretch-texture-sampler/

    Drop the Time Stretch LFO speed even lower

    Yep, your approach goes into the territory of granular synthesis, which keeps the character of the original sound. Granular makes "grains" out of pieces of the sample, which hold the character of the waveform. Good granular synthesis incorporates smooth levelling of the grain (volume up/down), with an envelope ranging from sine-shaped to square-shaped (smooth to harsh).
    My approach is not granular, but more iterative (repetition of a single waveform until another condition is met).
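(Editor's note: as an illustration of that "sine-shaped to square-shaped" grain levelling, one simple approach is to crossfade between a Hann window and a flat window. The function names and the `shape` parameter are my own invention, not taken from any particular app.)

```python
import numpy as np

def grain_envelope(length, shape=0.0):
    """Grain amplitude envelope: shape=0.0 is a smooth Hann window,
    shape=1.0 is flat (rectangular), and values in between morph
    from smooth to harsh."""
    hann = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(length) / length)
    return (1.0 - shape) * hann + shape * np.ones(length)

def make_grain(samples, start, length, shape=0.0):
    """Cut a grain out of a longer sample and apply the envelope."""
    return samples[start:start + length] * grain_envelope(length, shape)
```

The smooth end of the range fades each grain in and out to avoid clicks; the flat end preserves more of the grain's attack at the cost of harsher joins.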
