Nanostudio 2 update


Comments

  • @Slam_Cut said:
    Re: IAA & AB support
    NS2 does not have support for either of these. Based on what Matt has posted and a bit of history, I am surmising the following:

    Matt was burned by changes in iOS that added a lot of extra NS1 app maintenance, which of course he isn't paid for. He was really burned when iOS 9 (I believe) broke AB functionality for NS1. On the NS1 forum we had a huge Wish List thread of features we wanted, and with iOS changing from 32-bit to 64-bit he had to basically start over from scratch and build a new 64-bit app. We all know Apple creates some pretty crap code, so IAA creates headaches right off the bat, and based on past experience, changes in iOS will force changes to AB code as well. For a HUGE app like NS2 (which is coded by just one person), one needs to simplify where one can.

    NS1 was conceived as an all-in-the-box scratchpad for ideas that would later be moved to a PC/Mac DAW to be completed. Users liked NS1 so much that they wanted Matt to make NS2 a more complete DAW. We don't need no stinkin' PC/Macs. Matt still prefers the all-in-the-box style of app because the user doesn't have to mess around with getting other apps to work. NS2 is focused on efficient workflow and doing everything within the app. He compromised with user requests to make NS1 AB compatible, but with the new app it looks like he was trying to avoid the extra headache and the time-consuming changes he would potentially have to make to maintain functionality with every stupid change Apple makes to iOS.

    Using the PC/Mac model of VST/AU, Matt decided AUv3 would be the future of the iOS platform as well. Because he has no control over iOS, nor over the changes that other devs make to their apps, this makes sense business-wise to keep Blip Interactive viable. We may not like it, but as we grouse about the lack of features we should keep in mind that this is a guy who needs to feed his family. We should respect his decisions on choices like this. It also means that instead of time-consuming maintenance, he can focus on new features (iPhone, Audio Tracks, etc.).

    As I see it, all the functionality of AB is still viable. Yes, there is an extra step or two to getting the audio into NS2, but I don't find that to be too difficult. I've been doing that with NS1 since AB compatibility was broken by iOS 9. People who are less patient with this may find another DAW preferable. If your opinions are strong enough that you can't adjust your AB workflow, I understand and do not question your right to feel that way.

    For me, the UI/UX aspects of other DAWs are more of a hindrance than this one feature being native to NS2. It's faster to use AB work-arounds than to use other DAW apps. For me. I know this is a personal workflow issue and I'm not slamming anyone who feels differently. I'm just sharing my opinions about what works for me in the hope that it will help someone else.

    PS: I may be totally wrong about the reasons Matt chose re: IAA/AB support. My recollections are fuzzy and I hope I haven’t implied something erroneous by mis-remembering his posts on the NS1 forum.

    Note: Devs are wildly divergent in how they implement the AUv3 standard. This is still a bit of the Wild West, and AUv3 cannot be expected to be perfect just yet. It will take some time for the standard to be implemented consistently across the majority of apps so that host apps have no problems. The more we as iOS musicians support this model, the better it will become. I hope. 😬

  • Fair advice. Of course I want to support the developer. That's why I vacuum up most music apps from the App Store.
    I'll probably buy this one too. But whether I'll use it as much as Cubasis and Audiobus, that I doubt, because many of my favourite synths are not yet AUv3. And besides... I actually like the Audiobus environment. Changing a winning horse isn't always preferable. I thought BM3 was going to be great. Actually I've never used it, even after buying most of the samples. The learning curve is annoying. I like to stick with what works and make music.
    Lastly, kudos to the sheer talent of the developer of this monster app (NS2).

  • Yeah, I don't see what the fuss is about. If you are stuck in the IAA paradigm, then you have lots of options, but the time has come to move on. I would prefer a streamlined app rather than one that is trying to do everything.

    I mean, does any serious desktop synth exist only as a standalone? I can't think of any; VST/AU have become standards because they work. AUv3 is the future of iOS, and I for one am thankful that IAA is becoming a thing of the past. As much as I appreciate Audiobus, I honestly haven't touched it in at least half a year. It always seemed to me a workaround until a better solution came around, and with AUv3, it's here.

    I know everyone has their own usage situation, and personally NS2 seems to fit perfectly into my use of iOS. I prefer to route audio out to record and mix in Ableton, as I find mixing in iOS too slow and the plugins aren't quite as fun as my desktop ones (getting there though!), so the lack of audio tracks isn't an issue for me.

    I don't have any illusions of NS2 being the be-all end-all, but I think it truly is a step in the right direction. And that synth!! Holy shit, that looks amazing!

  • @Turntablist said:
    Which of course you have absolutely no proof of, so in fact you are seeing no pattern at all; the only pattern we are seeing is the never-ending pattern of users on this forum assuming they know the inside dealings of developer X, Y or Z.

    Fair enough. I apologize for singling out Intua to make my point. I should have generalized and not pretended to know their development issues to make a point about NS2.

    A general example to illustrate my point:

    A 10.0.1 version of a product had some features users found exciting and wanted to see enhanced.

    After months and some "beta" release candidates, a 10.0.5 version/update was released and features were dropped.

    Some users felt frustrated by the feature loss. I believe the features were dropped to produce a stable update, but promises were made to add them back in.

    My contention is that some apps just become unwieldy and difficult to stabilize as they continue to see enhancements.

    Sometimes it's wise to do what NanoStudio 2 is doing: start over and stop carrying a large block of features that makes progress so difficult. NOTE: Many features are forced onto apps by Apple creating new standards. Support a platform and you just have to cope with that as it makes sense for your product.

    That's my point, and selecting a specific app for my two models was over-reaching:

    1. Focus on stability, performance, code size, and essential features.
    2. Keep an installed base happy and try to bolt on features, which leads to excess development effort and delays.

    It's a software development "argument": a set of reasons given with the aim of persuading others that an action or idea is right or wrong.

    It's a pattern in the software development industry; these issues have surfaced repeatedly.

    Pick your favorite developers and ask which approach they seem to favor. Thanks for the use of the soapbox.

  • I follow you @McDtracy - makes total sense. It’s a balancing act.

  • @brambos said:

    @alecsbuga said:
    They're basically assuming any dev is an old dog accustomed to C and assembler code and how CPUs work, thus creating something like a walled garden that's very hard for Swift developers to get into. I'm really pissed and frustrated about this.

    There's no denying that Apple's documentation is pure crap. In fact, I feel they should have stuck with Objective-C, never created Swift at all, and invested all those resources in proper documentation and developer support instead. There is no reason for Swift to exist when they have a perfectly fine language already which does everything anyone could need (but that's my personal pet peeve).

    Regarding C: if you want to do realtime audio on iOS, C and C++ are the only way. Like really, THE ONLY WAY to make safe apps. Other languages, like Swift or ObjC, are not realtime-safe and will at some point lead to severe performance issues for your users.

    Our kind host @Michael wrote an excellent blog post about it:

    http://atastypixel.com/blog/four-common-mistakes-in-audio-development

    It's really the best advice anyone could ever give if you're diving into mobile DSP stuff! :)

    I believe Rust is also an option.
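
    To make that concrete, here is a minimal sketch in C of what that advice boils down to (made-up names, not from Michael's post or any particular app's code): the render callback does no locking, no allocation, and no Objective-C/Swift calls, and parameters cross over from the UI thread via atomics.

        #include <stdatomic.h>
        #include <stddef.h>

        /* Parameter shared between the UI thread and the audio thread.
         * A single atomic float avoids locks entirely. */
        static _Atomic float master_gain = 1.0f;

        /* UI thread: safe to call at any time. */
        void set_master_gain(float g)
        {
            atomic_store_explicit(&master_gain, g, memory_order_relaxed);
        }

        /* Audio thread render callback: no locks, no malloc, no file I/O,
         * no Objective-C/Swift calls: nothing that can block. */
        void render(float *out, const float *in, size_t frames)
        {
            float g = atomic_load_explicit(&master_gain, memory_order_relaxed);
            for (size_t i = 0; i < frames; i++)
                out[i] = in[i] * g;   /* pure arithmetic, bounded time */
        }

    Anything bigger than a float (a whole preset, say) needs a lock-free queue or a pointer swap, but the principle is the same: the audio thread must never wait.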

  • edited November 2018

    @McDtracy said:
    Shortened for readability

    While I agree with your sentiment that some old technologies should be dropped in favour of moving forward, I really don't think it needs a generalized overview of a non-existent developer that clearly hints at a particular developer who has nothing at all to do with the conversation. We are mostly adults here.

    Here is a better way to put it...
    Matt has been developing this app for six years. He does not see the value in supporting what he believes to be a dead technology, and his previous release has to give him some credibility in understanding not only the market but also developing for the technologies in that market. Again: six years. If he thinks it isn't a viable proposition to support IAA, then take him at his word. He has been working on it for six years; if it was viable, it would be included.

  • So, has NS2 been sent to Apple now for release?

  • edited November 2018

    @[Deleted User] said:
    For me, having to export audio into something else and then import it back into the NS2 sampler is just too troublesome. I will wait for audio tracks to be added next summer, 2019.

    @[Deleted User] said:
    So, has NS2 been sent to Apple now for release?

    You changed your mind? :-)


    From the Blip Interactive forums, posted today:
    http://forums.blipinteractive.co.uk/node/11997?page=2#comment-35450

    Gah, had a setback with Audio Units at the last moment (just .... don't ... ask). Pretty sure they're solid now.

    I'm desperately finishing off a couple of manual pages and then I'm pretty well there. Apple approval times are fast from what I've been hearing, so I feel I'm still on for the planned launch of Dec 7th - might be a couple of days late but nothing major.

    I haven't forgotten I offered to eBay one of my own legs if I didn't make it this year :)

  • @dendy said:

    Shortened for readability

    As mentioned before, it sort of depends on whether the 'sampler' can record in sync while the project is playing, hopefully meaning audio can be recorded directly into the NS2 sampler, avoiding the procedure of naming and exporting the recorded audio to a separate DAW and having to import it back into NS2. If not, then I'll wait until June 2019 for audio tracks.

    Maybe you already know the answer?

  • Matt from Blip said he had a minor blip with AU, so it delayed him. He said he is still confident of a 12/07 release though. I'm paraphrasing from memory. Check the NanoStudio forums for the exact wording.

  • @kinkujin said:
    Matt from Blip said he had a minor blip with AU, so it delayed him. He said he is still confident of a 12/07 release though. I'm paraphrasing from memory. Check the NanoStudio forums for the exact wording.

    ...or look a couple of posts up 😁😁

  • C'mon, give us some videos and teasers...

  • @Trueyorky said:

    @kinkujin said:
    Matt from Blip said he had a minor blip with AU, so it delayed him. He said he is still confident of a 12/07 release though. I'm paraphrasing from memory. Check the NanoStudio forums for the exact wording.

    ...or look a couple of posts up 😁😁

    hahahahaha Didn't see that. HAHA!!

  • @kinkujin said:

    @Trueyorky said:

    @kinkujin said:
    Matt from Blip said he had a minor blip with AU, so it delayed him. He said he is still confident of a 12/07 release though. I'm paraphrasing from memory. Check the NanoStudio forums for the exact wording.

    ...or look a couple of posts up 😁😁

    hahahahaha Didn't see that. HAHA!!

    It's spooky, this place; you get to hear the echoes before you even speak... :)

  • @[Deleted User] said:


    As mentioned before, it sort of depends on whether the 'sampler' can record in sync while the project is playing, hopefully meaning audio can be recorded directly into the NS2 sampler, avoiding the procedure of naming and exporting the recorded audio to a separate DAW and having to import it back into NS2. If not, then I'll wait until June 2019 for audio tracks.

    Maybe you already know the answer?

    If I understand you correctly, what you are talking about is 'Resampling' in Blip Speak (the official dialect of Synthese spoken in the Nanostudio World) [humorous intent for those that were unsure].
    For instance, you record a synth part and want to use that riff as an audio sample. Solo that track, resample it, and use the sample either in Slate (drum pads) or Obsidian (synth). You don't need to leave NS2 for this. Pads can have super long samples, so you can for example play an entire bassline for a song, resample that, then use it as an audio clip or feed it into Obsidian for mangling. The old TRG drum pad in NS1 could do this. Same for the Eden synth, but Eden only had one sample. Obsidian has, I think, 24 per oscillator...? I may mis-remember that post. Been following this a long time.

    Anyway, hope that helps. If I missed your point, please give me a short description of the exact workflow and maybe I can offer a suggestion. Also, if you use PC/Mac, the old version NS1 was made available for those platforms. It was sort of a publicity thing to let people try NS1 for free. If you didn't use NS1, you could try a few things on the PC version to get an idea of what was possible. I think the new version takes the old features up several notches.

  • @Slam_Cut said:

    Shortened for readability

    In short, the 'synced sample recording' could be used as a temporary substitute for the missing audio tracks.

    So, for example, if there is a need to record vocals over a backing track, one could start playback and record the vocals in the sampler, where the start/end of the sampling could be aligned to bars/beats.

    This would make it very easy to align/trigger the recorded samples on the timeline.
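
    The bar/beat alignment is really just arithmetic on the project tempo and sample rate. A rough sketch in C (made-up names, 4/4 assumed, nothing to do with NS2's actual code):

        #include <stddef.h>

        /* Frames per bar at a given tempo (4/4 assumed). */
        size_t frames_per_bar(double bpm, double sample_rate)
        {
            return (size_t)((60.0 / bpm) * 4.0 * sample_rate + 0.5);
        }

        /* Round a raw frame position down to the start of its bar,
         * so a recording starts/ends exactly on a bar line. */
        size_t align_to_bar(size_t frame, size_t bar_frames)
        {
            return (frame / bar_frames) * bar_frames;
        }

    At 120 BPM and 44100 Hz, one bar is 88200 frames, so any recording trimmed to a multiple of that will drop onto the timeline in sync.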

  • I can't wait to finally make music again! It's not me, it's my AU...

    Joking aside, pretty cool that he's so close to releasing. I recently looked at an old iPad and remembered NS was still on it. Played with it for a bit. It was nice to think back to when I first found NS/SunVox/Caustic and realized I might not need a computer to make music. Having it on my phone, giving me something to jam with on long bus rides, was great. We have come a long way.

  • edited November 2018

    @Turntablist said:

    While I agree with your sentiment that some old technologies should be dropped

    OK. You agree with my sentiment. I apologized for using a product to make the point. I'm still trying to re-frame it as an idea.

    Here is a better way to put it...

    Cool. Active listening... let me see if I'm getting through.

    Matt has been developing this app for six years. He does not see the value in supporting what he believes to be a dead technology, and his previous release has to give him some credibility in understanding not only the market but also developing for the technologies in that market. Again: six years. If he thinks it isn't a viable proposition to support IAA, then take him at his word. He has been working on it for six years; if it was viable, it would be included.

    Sorry. I failed to make my point.

    Actually, Matt and the specifics of his app are not central to the point I'm making. Perhaps a new thread is a better idea to persuade forum members that maybe IAA's usefulness has had its run. I need to start looking at newer AUs to see if they have been dropping the IAA APIs as well. Of course, I look for IAA only when a new app purchase doesn't show up as an AU. AUs make IAA less useful in most cases. There might be MIDI cases where IAA MIDI is still critical. I'd appreciate knowing more about that.

    [Notice how I'm shifting back to discussing ideas.]

    The idea of striving for stability over users' feature demands applies in many situations:

    Korg not implementing AUs, for example. Korg's stability is legendary on iOS. Bolting on AUs could make the code impossible to support without changing the features users rely on. Their rationale is not clear, but wanting to use Korg synths in NS2 will be an issue.

    I wonder if a programmer can make an AU that loads 'one-and-only-one' IAA Synth?

    Smaller developers are being begged to add AU support and finding it extremely difficult, when they would really like to add more music-related features to their apps. @alexbuga, with Samplist, is in that hell right now. He got it to work in a few hosts, but testing showed he wasn't done yet. I hope he gets out of AU jail soon, because there are musical features he might rather be coding.

    So every time I see a user say "Can this app add AU?" I think "Be careful what you ask for." Massive changes to code bases are a type of death march. Matt has made some promises, and reality has been testing his resolve to reach "Bataan".

    Maybe there are more examples worth mentioning, or best-case developers who are considering changing their approach to the legacy standards.

    If AudioKit makes AU easier to implement with rock-solid code, it might help solo devs deal with the complexity of making users happier with app interoperability.

    If the number of words I have used offends you, just scroll and skip. TL;DR.

    Sometimes it takes a lot of words to clearly make a point.

  • @Samu said:
    In short, the 'synced sample recording' could be used as a temporary substitute for the missing audio tracks.

    So, for example, if there is a need to record vocals over a backing track, one could start playback and record the vocals in the sampler, where the start/end of the sampling could be aligned to bars/beats.

    This would make it very easy to align/trigger the recorded samples on the timeline.

    Precisely. This functionality existed in NS1, and it will be what I use until Audio Tracks come this summer, but I will likely keep using the Slate pads for many smaller audio clips to keep projects visually streamlined. A bunch of audio tracks can take up a lot of vertical space on mobile devices if, for instance, you just have a lot of small chunks of audio that need to be dropped in.

  • @brambos said:

    @McDtracy said:
    My money is always on developers that choose supportability as a top priority. They endure in the market.

    ^ What this guy said ;)

    To support my point: the @brambos balance of stability and features in his apps speaks volumes.

  • edited November 2018

    @Slam_Cut said:

    Shortened for readability

    No, that's pretty much it; resampling a track and having it play back in the sampler in sync, without leaving NS2, is what I was asking. Cheers

  • @Slam_Cut said:

    Shortened for readability

    ...Pads can have super long samples, so you can for example play an entire bassline for a song, resample that, then use it as an audio clip or feed it into Obsidian for mangling.

    The big question for me is, can you select a tweakable range within a 'super long sample' or do you have to crop it?

  • I’m not sure about this, but I get the impression that’s what the Spectral Synthesis (I’m probably mis-remembering that name) does. I’m still a bit uncertain of that feature, but from what I’ve read, I think it might do what you are asking. Maybe. 😬

  • @McDtracy said:

    Shortened for readability

    Oh sorry, I thought your point was to describe, from a developer's point of view, why out-of-date technologies should be dropped moving forward, but you seem to be stuck in some kind of pointless loop where you keep making the same point over and over with different words, terms and sentences. You have, I think, made your point.

  • @Turntablist said:
    but you seem to be stuck in some kind of pointless loop

    exit(-1)

  • The silence is killing me ... it's deafening....

  • edited November 2018

    @Slam_Cut said:
    I’m not sure about this, but I get the impression that’s what the Spectral Synthesis (I’m probably mis-remembering that name) does. I’m still a bit uncertain of that feature, but from what I’ve read, I think it might do what you are asking. Maybe. 😬

    No, I meant: it says you can use sound files up to an hour long. What if you want to use just ten seconds of that sound file? Are you forced to crop it, or can you just highlight ten seconds and keep the rest intact, able to readjust sliders to select a different range if you decide to? Can you automate the start and end, that sort of thing? Anyway, we'll find out soon enough.

  • @AudioGus said:

    No, I meant: it says you can use sound files up to an hour long. What if you want to use just ten seconds of that sound file? Are you forced to crop it, or can you just highlight ten seconds and keep the rest intact, able to readjust sliders to select a different range if you decide to? Can you automate the start and end, that sort of thing? Anyway, we'll find out soon enough.

    I'm only guessing, but setting start/end points is the most basic sampler feature. I would assume that's included for sure in such a complex app built around a sampler. Not sure about the automation, but we can hope :)
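
    For what it's worth, the reason start/end points are considered basic is that they are cheap to do non-destructively: the player just keeps two indices into the untouched recording, so the range can be re-adjusted at any time without cropping. A rough sketch in C (made-up names, no relation to NS2's actual code):

        #include <stddef.h>

        /* A sample region: two indices into the unmodified buffer. */
        typedef struct {
            const float *data;     /* full recording, never modified */
            size_t total_frames;
            size_t start;          /* audible range, adjustable any time */
            size_t end;
        } SampleRegion;

        /* Change the audible range without touching the audio data. */
        void set_range(SampleRegion *r, size_t start, size_t end)
        {
            if (end > r->total_frames) end = r->total_frames;
            if (start > end) start = end;
            r->start = start;
            r->end = end;
        }

        /* Copy the selected range into an output buffer for playback. */
        size_t play_range(const SampleRegion *r, float *out, size_t max_frames)
        {
            size_t n = r->end - r->start;
            if (n > max_frames) n = max_frames;
            for (size_t i = 0; i < n; i++)
                out[i] = r->data[r->start + i];
            return n;
        }

    Destructive cropping, by contrast, throws the rest of the buffer away; automating the start and end would just mean changing those two numbers over time.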

  • @recccp said:

    I'm only guessing, but setting start/end points is the most basic sampler feature. I would assume that's included for sure in such a complex app built around a sampler. Not sure about the automation, but we can hope :)

    The only reason I ask is that the shots of the sampler I saw had an 'audio editor' button on them as opposed to a waveform... which made me wonder if it is destructive editing... probably not...

  • @AudioGus said:

    No, I meant: it says you can use sound files up to an hour long. What if you want to use just ten seconds of that sound file? Are you forced to crop it, or can you just highlight ten seconds and keep the rest intact, able to readjust sliders to select a different range if you decide to? Can you automate the start and end, that sort of thing? Anyway, we'll find out soon enough.

    I'm not sure. I don't think this question has come up before. I think that in NS1 we could save 'edited' parts and not overwrite the original sample (i.e. non-destructive), but there was no on-the-fly access to just a part/looped section of a longer sample. I don't fully understand the purpose of this, but it sounds interesting. I think NS2 will come with more re-sampling ability, and I'd expect it to be somewhat easy to take portions of audio to use in the synth or drum pad. Yeah, probably best to wait and see, but speculating on the features is fun.

This discussion has been closed.