Audiobus: Use your music apps together.

Question to developers: Is Audio to Physical Modeling possible?

Hello all!

There are various boxes from EHX, and also Jam Origin's MIDI Guitar 2, where audio is transformed to MIDI (no clue what EHX is doing).

I am wondering whether it would be possible to have physically modeled instruments with the realism of SWAM and Pianoteq, but, unlike MIDI Guitar 2, go from audio directly into the physical modeling engine?

This way, people could hum and sing realistic violin/flute solos with all the dynamics of the voice, and people playing electric/acoustic instruments could transfer all the nuances of their playing that are easily missed via MIDI/samples.

Is it possible?

All the best!


Comments

  • MIDI is just the interface between pitch detection and the actual instrument.
    You can send anything over it, be it pitch, volume, or harmonic content translated to a number of MIDI controllers (as in SWAM) to control the sound.
    If you want to play SWAM instruments with your voice, then you need software that generates MIDI data appropriate for controlling SWAM, for example.
    MIDI Guitar 2 already does the pitch, velocity, bend and aftertouch detection and conversion, and it's up to you what you control with that, be it a classic synth or a physical modeling engine.
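That pitch-to-MIDI pipeline can be sketched in a few lines. This is a purely hypothetical illustration — the function names and the naive autocorrelation pitch tracker are my own, not MIDI Guitar 2's or SWAM's actual code:

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def detect_pitch(frame: np.ndarray) -> float:
    """Estimate the fundamental frequency (Hz) by autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    d = np.diff(corr)
    # skip the lag-0 peak: start searching where the curve first turns upward
    start = np.argmax(d > 0) if np.any(d > 0) else 1
    lag = start + np.argmax(corr[start:])
    return SR / lag if lag > 0 else 0.0

def frame_to_midi(frame: np.ndarray):
    """Convert one audio frame into (note, 14-bit pitch bend, expression CC)."""
    f0 = detect_pitch(frame)
    midi_float = 69 + 12 * np.log2(f0 / 440.0)     # Hz -> MIDI note number
    note = int(round(midi_float))
    bend = int(8192 + (midi_float - note) * 4096)  # assumes a +/-2 semitone bend range
    rms = np.sqrt(np.mean(frame ** 2))
    expression = min(127, int(rms * 127 * 4))      # crude loudness -> CC11 value
    return note, bend, expression

# A 440 Hz sine at moderate level should land on MIDI note 69 (A4).
t = np.arange(2048) / SR
note, bend, expr = frame_to_midi(0.5 * np.sin(2 * np.pi * 440 * t))
```

A real tracker would add onset detection, smoothing, and per-frame continuous controller streams, but the shape of the problem — audio analysis in, MIDI messages out — is the same.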

  • From the little I've read about the technology used by EHX, I don't think any of the 9 series mellotron/synth/keys pedals use MIDI. I seem to remember reading that they were designed by David Cockerell of EMS, who designed the VCS3. They seem to work in a similar way to the HRM modeling of the early Roland VG units, which uses the actual sound of the guitar strings. I have two of the VG-8EXs along with a couple of the EHX 9 series pedals. The only audio-to-physical-modeling I've used on iOS is via the Objeq app, which is truly great but also limited in the type of sounds it can produce. Nevertheless, coupled with suitable impulse responses in a convolution reverb, the app works wonders on an audio input.

  • edited January 2020

    @rcf said:
    From the little I've read about the technology used by EHX, I don't think any of the 9 series mellotron/synth/keys pedals use MIDI. I seem to remember reading that they were designed by David Cockerell of EMS, who designed the VCS3. They seem to work in a similar way to the HRM modeling of the early Roland VG units, which uses the actual sound of the guitar strings. I have two of the VG-8EXs along with a couple of the EHX 9 series pedals. The only audio-to-physical-modeling I've used on iOS is via the Objeq app, which is truly great but also limited in the type of sounds it can produce. Nevertheless, coupled with suitable impulse responses in a convolution reverb, the app works wonders on an audio input.

    Thank you very much! This is exactly what I was wondering was possible: the actual instrument or voice transformed in real time by a physical modeling engine, so every nuance of the playing/singing is picked up instead of going through MIDI/samples.

    Yes Objeq is really great :) I'd love to see something similar but for instruments!

    If it is possible to create, I hope someone does it at Pianoteq/SWAM quality.

  • This might be of interest to you. Not exactly what you asked for and not cheap!

    https://www.krotosaudio.com/reformer/

    I expect we will have this type of thing on iOS at some stage.

  • SWAM has instruments on iOS. Finger Fiddle is an impressive modeled violin/viola/cello/double bass. It still works well, though it seems to no longer be actively developed.

  • @espiegel123 said:
    SWAM has instruments on iOS. Finger Fiddle is an impressive modeled violin/viola/cello/double bass. It still works well, though it seems to no longer be actively developed.

    Funny that you mentioned Finger Fiddle! I made a track yesterday with it, and it was exactly Finger Fiddle that got me thinking: “why can't I just plug in my guitar and get the same dynamics as in this app?” ;)

    Audio to Physical Modeling.

  • @BlueGreenSpiral said:
    This might be of interest to you. Not exactly what you asked for and not cheap!

    https://www.krotosaudio.com/reformer/

    I expect we will have this type of thing on iOS at some stage.

    Great that this technology exists, yeah in 5-7 years we might just have what I'm looking for now :)

  • @Treaszure said:

    @espiegel123 said:
    SWAM has instruments on iOS. Finger Fiddle is an impressive modeled violin/viola/cello/double bass. It still works well, though it seems to no longer be actively developed.

    Funny that you mentioned Finger Fiddle! I made a track yesterday with it, and it was exactly Finger Fiddle that got me thinking: “why can't I just plug in my guitar and get the same dynamics as in this app?” ;)

    Audio to Physical Modeling.

    That isn’t a physical modeling issue. Analyzing the pitch and dynamics of the guitar and translating that into meaningful data is something else altogether.

  • wim
    edited January 2020

    That sounds like an insanely complicated math problem with an infinite number of inputs and outputs. Guitar > violin? Violin > Guitar? Clarinet > Guitar? Clarinet > Tuba? Tuba > Guitar? Each and every combination would need hugely complicated and specific arithmetic.

    I can’t imagine it.

    Extracting the pitch, dynamics, and expression of one instrument into MIDI notes and ccs, then driving emulations of other instruments I can see though, and some apps do already have it.

    On the other hand, Yonac Voxsyn does actually synthesize new tones based on the tones fed to it. That is a far cry from physical modeling though.

    There are those vocal apps that can model different singing voices out of the voice fed into them. So maybe if you stuck to just voice input and changed that into other physically modeled instruments ... but some sort of generic converter of audio between different instruments - I just can't see it.

  • edited January 2020

    Neither can I...
    A piano is fairly simple to model because it has a precise mechanism with 100% reproducible performance parameters.
    A plucked or bowed string instrument adds hundreds of touch variations that are 'modulated' in strength and direction, often within the blink of an eye, with both the exciting and the damping hand.
    This is what makes a player's 'tone' and is done by 'muscle memory'.
    Aside from the complexity of translating it into parameters, the response also depends on the individual instrument's construction.

    You don't have to be a sophisticated player to take advantage of all these variations - it's the fun in playing a 'real' instrument.

  • edited January 2020
    The user and all related content has been deleted.
  • edited January 2020
    The user and all related content has been deleted.
  • Boss SY-1000?

  • Yes, I know well about the complexity of 'piano modelling'.
    But that wasn't the point at all: the action of the piano is defined and predictable, while picking a guitar is not... at least not in a similarly easy-to-describe way. ;)

  • This is a trivial problem that can be solved by a smart high school kid on his phone... in 2050.
    Until then... it's beyond our current technology.

    NOTE: I will be dead by 2050... so sue me.

  • Actually it's not a problem...
    A synthesis engine capable of generating all the nuances of a human player needs a similar amount of learning experience as the instrument itself.
    Years ago (with the release of the Garritan Stradivarius library iirc) someone programmed a classic piece that was (practically) indistinguishable from records of the original as a demo.
    But this 'someone' was an experienced violin player and it took a real big effort to place all the articulations properly.

    Pianoteq's solution is a great approximation, but won't sound like a piano/grand in a room, as @Max23 mentioned. For decades people have been praising sampled pianos without sympathetic string resonance (an essential part of live piano sound), and only recently this feature became more common.
    (the majority of sampled instruments still come without it, or with a crippled implementation)

    That doesn't mean fake instruments are a no-go, but why not accept the limitations while enjoying human playing? It's fun for both performer and audience... hopefully ;)

  • edited January 2020
    The user and all related content has been deleted.
  • @Max23 said:
    if u want realistic blown instruments use wind controller (its a mouthpiece u blow into)
    if you want realistic violins use MPE keyboard, roli or something
    audio to something isnt able to capture all the expression ...

    For truly realistic strings, you need something with which to convey bowing information (including where the bow is touching).

  • The user and all related content has been deleted.
  • Bow parameters:
    hit point on string (distance from bridge)
    part of the bow doing the hit
    direction of movement
    speed, pressure (changing and driven by feedback the player receives from the instrument)

    @Max23 the resonance of piano strings (when keys are pressed) is an intense calculation, but not a fundamental problem. GEM Realpianos did that (in humble form) in the early 2000s.
    I assume things have been improved since then...
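The bow parameters listed above could be collected into a control-signal record like the following. This is purely illustrative — the field names, ranges, and the toy brightness mapping are my own assumptions, not any real engine's API:

```python
from dataclasses import dataclass

@dataclass
class BowState:
    """One frame of bow control data; all names and ranges are hypothetical."""
    bridge_distance: float  # 0.0 = at the bridge (sul ponticello), 1.0 = over the fingerboard
    bow_position: float     # 0.0 = frog, 1.0 = tip (which part of the bow hits the string)
    direction: int          # +1 = down-bow, -1 = up-bow
    speed: float            # bow speed in m/s, continuously varying
    pressure: float         # normalized 0..1, driven by the player's feedback loop

    def brightness(self) -> float:
        """Toy mapping: bowing near the bridge with high pressure sounds brighter."""
        return max(0.0, min(1.0, (1.0 - self.bridge_distance) * 0.6 + self.pressure * 0.4))

# Example frame: a firm down-bow fairly close to the bridge.
state = BowState(bridge_distance=0.2, bow_position=0.5,
                 direction=+1, speed=0.3, pressure=0.7)
b = state.brightness()
```

The point of the sketch is the data-rate problem: a physical-modeling engine needs a stream of records like this at audio or near-audio rate, and extracting all of these fields from a raw audio signal alone is the hard part.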

  • @Telefunky said:
    Bow parameters:
    hit point on string (distance from bridge)
    part of the bow doing the hit
    direction of movement
    speed, pressure (changing and driven by feedback the player receives from the instrument)

    Which string played is vital as well. The same note played on a high position of the low G string is going to sound radically different than the same note played on the A or E string. Tension, winding vs. no winding, gut vs. steel, and distance from the finger to the bridge have a huge effect on tone. There's also rate of change from up to down, suddenness of pressure changes, etc. etc. etc.

    Sure you could get closer than a lot of people would notice though.

  • @wim said:

    @Telefunky said:
    Bow parameters:
    hit point on string (distance from bridge)
    part of the bow doing the hit
    direction of movement
    speed, pressure (changing and driven by feedback the player receives from the instrument)

    Which string played is vital as well. The same note played on a high position of the low G string is going to sound radically different than the same note played on the A or E string. Tension, winding vs. no winding, gut vs. steel, and distance from the finger to the bridge have a huge effect on tone. There's also rate of change from up to down, suddenness of pressure changes, etc. etc. etc.

    Sure you could get closer than a lot of people would notice though.

    It is interesting that as people go out less and less to hear live music and hear more and more virtual instruments, they (even people who listen to a lot of music) have become accustomed to the sound of very good virtual instruments and have normalized to them.

    The best TV and film composers with limited budgets have figured out how to play/program their virtual instruments in ways that avoid their weaknesses. So, they sound very convincing -- oftentimes indistinguishable from the real thing EXCEPT that they have to stick to those articulations and ways of phrasing that sound convincing essentially by limiting the vocabulary.

    And so a lot of people have forgotten what listening to a fuller vocabulary is like.

  • If you are an active player of an acoustic instrument you become so much more familiar with the sound that it's very hard to be comfortable with emulations, except maybe buried in a mix.

    I can only stand iOS guitar simulations if I put my brain on the shelf and think of them as a totally different instrument. Yonac Steel Guitar is the only one I can stand ... probably because I can't play slide guitar for real to save my life!

  • @wim said:
    If you are an active player of an acoustic instrument you become so much more familiar with the sound that it's very hard to be comfortable with emulations, except maybe buried in a mix.

    so true... :+1:

  • @wim said:
    If you are an active player of an acoustic instrument you become so much more familiar with the sound that it's very hard to be comfortable with emulations, except maybe buried in a mix.

    I can only stand iOS guitar simulations if I put my brain on the shelf and think of them as a totally different instrument. Yonac Steel Guitar is the only one I can stand ... probably because I can't play slide guitar for real to save my life!

    100%, including the steel guitar comments. :)

    @McD said:
    This is a trivial problem that can be solved by a smart high school kid on his phone... in 2050.
    Until then... it's beyond our current technology.

    Think this is going to be my new go-to reply in work meetings. Thx. :+1:

  • Impaktor and Objeq are the only apps I know of. They route the audio - via mic or pickup - from you tapping your desk into physical models of percussion.
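Conceptually, apps in that category let the raw tap audio excite a bank of damped resonators - the "modes" of a virtual drum or bar. Here is a minimal sketch of that idea; the mode frequencies and decay times are made up, and nothing here reflects Impaktor's or Objeq's actual internals:

```python
import math

SR = 44100  # sample rate (Hz)

def modal_resonator(excitation, freq_hz, decay_s):
    """Two-pole resonator: y[n] = x[n] + 2r*cos(w)*y[n-1] - r^2*y[n-2]."""
    w = 2 * math.pi * freq_hz / SR
    r = math.exp(-1.0 / (decay_s * SR))   # pole radius derived from decay time
    a1, a2 = 2 * r * math.cos(w), -r * r
    y1 = y2 = 0.0
    out = []
    for x in excitation:
        y = x + a1 * y1 + a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

# A single-sample click stands in for the "tap"; in the real apps the
# excitation is the live mic/pickup signal, which is what carries the feel.
tap = [1.0] + [0.0] * (SR // 10 - 1)
modes = [(220.0, 0.8), (583.0, 0.5), (1208.0, 0.3)]   # (freq Hz, decay s) pairs
voice = [sum(ys) for ys in zip(*(modal_resonator(tap, f, d) for f, d in modes))]
```

Because the input audio itself is the excitation, the dynamics and attack character of the tap survive into the output - which is exactly the property the thread is asking for.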

  • edited January 2020

    @Treaszure said:
    Hello all!

    There is various boxes from EHX and also Jamorigins MIDI Guitar 2 where audio is transformed to MIDI (no clue what EHX is doing).

    No MIDI on those EHX boxes.

    Bill Ruppert, as you know, has done the majority of demos for those EHX boxes. Before that, he was using a Roland VG-99 - a device that transforms guitar string audio into various modeled sounds - Roland GR-300 guitar synth, organ, DX7-style piano, sitar, etc.

    From what I recall, he later got a Kemper Profiler and started making profiles of his VG-99 patches in the Kemper. Somewhere along the line, he and/or associates at EHX discovered the "profiling" concept works with other stuff too. So the Mel9 pedal, for example, has "profiles" made from actual Mellotrons - if I read Ruppert's posts correctly. I don't know what happened to them; perhaps my Google skills have weakened with time. Nobody outside Kemper seems to want to talk about how "profiling" works - could it be some variation on IR tech?

    A cheaper way to mimic violin and flute sounds on guitar is to get an E-Bow and learn how to use it.

    For about $300 you could get a Gizmotron 2.0 - but whether it's worth the cost and effort to install it has to be up to you. Good question as to why not just get an EHX Mel9 then...

    You probably don't want to follow my path. I bought a real viola and took lessons on it, and later added electric violins to my rig. There were times where I questioned my choice, heh. I guess I like to do some things the hard way.

  • edited January 2020

    Other non-MIDI apps/devices based on polyphonic pitch tracking: Yonac's Roxsyn app, Boss SY-300 and SY-1000, Meris Enzo...

    I get what you're saying though. Get past the polyphonic pitch tracker and feed the sound through a set of formant filters. As long as the voice can generate a passable envelope, I think this has potential:

    https://www.soundonsound.com/techniques/formant-synthesis

    I have read though that in many cases it is the initial attack and subsequent performance (controlled modulation) rather than the physical model that makes it what it is.
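The source-filter idea from that article can be sketched as a buzzy pulse-train "voice" fed through resonators centred on the first three formants of an "ah" vowel. The formant frequencies and bandwidths below are rough textbook approximations, and none of this is taken from any particular app:

```python
import math

SR = 44100  # sample rate (Hz)

def resonator(x, freq, bandwidth):
    """Two-pole resonant (formant) filter with rough gain normalisation."""
    r = math.exp(-math.pi * bandwidth / SR)   # pole radius from bandwidth
    w = 2 * math.pi * freq / SR
    a1, a2 = -2 * r * math.cos(w), r * r
    g = 1 - r
    y1 = y2 = 0.0
    out = []
    for s in x:
        y = g * s - a1 * y1 - a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

# 110 Hz pulse train as the glottal-like source, 0.2 s long.
period = SR // 110
source = [1.0 if n % period == 0 else 0.0 for n in range(int(0.2 * SR))]

# Approximate F1..F3 (freq Hz, bandwidth Hz) for an open "ah" vowel.
formants = [(700, 130), (1220, 160), (2600, 200)]
ah = [sum(band) for band in zip(*(resonator(source, f, bw) for f, bw in formants))]
```

In the scenario discussed in this thread, the pulse train would be replaced by (or driven from) the tracked input signal, so the player's envelope and attack shape the excitation while the formant filters impose the target timbre.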
