Investigating Methodologies of Sound Synthesis

edited December 2019 in Other

I had an idea/question that came to mind while I was contemplating the existing methods used to produce, shape, and modulate waveforms in sound synthesis.

I was thinking about the different ways current synths combine multiple waveforms, and I realized that the combined voltage fed into the output stage of an amplifier is essentially a single "sum" of all the waveforms produced by however many oscillators, wave shapes, and LFOs, plus the harmonics created by filters and such. The same is also true of FM synthesis.

I used to think of waveform synthesis as multiple waveforms overlapping. In other words, if one oscillator was producing a square wave and another a sine wave, I was envisioning two separate waveforms superimposed one upon the other.

But now I'm realizing that's not true. It's the sum of those two waveforms that is in fact created. The voltages are added or subtracted relative to a single point in time in the phase of each cycle. The sum of the two is the value of the voltages when combined.

This is analogous to sound existing as a rapid fluctuation of variable pressure waves traveling through a conductive medium such as air or water.
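This pointwise summing is easy to see in code. Here's a minimal Python sketch (the oscillator functions, frequencies, and sample rate are illustrative assumptions, not any particular synth's implementation):

```python
import math

SAMPLE_RATE = 44100  # an assumed, common sample rate

def square(freq, t):
    # Naive square wave: +1 for the first half of each cycle, -1 for the second.
    return 1.0 if (t * freq) % 1.0 < 0.5 else -1.0

def sine(freq, t):
    return math.sin(2 * math.pi * freq * t)

def mixed(t):
    # "Mixing" the two oscillators is just adding their instantaneous values:
    return square(220.0, t) + sine(330.0, t)

samples = [mixed(n / SAMPLE_RATE) for n in range(1024)]
```

At every instant there is only one combined value, not two overlapping waves, which is the point made above.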

Now I have a question:
• Imagine four analog oscillators each producing a basic waveform shape. Each waveform is sent into a circuit that mixes them, then outputs the result to an amplifier section powering a speaker.

The result is a sound.

• Next imagine four analog oscillators, each producing the same four basic waveform shapes as before. But in this case each of the four waveforms is sent to its own amplifier section, each powering its own individual speaker.

Let all four speakers be arranged in a grid of 2x2, and let the edges of each speaker be 1mm apart.

Assume the total wattage of sound output is the same for each part of the experiment.

The question is....
If you stand back and listen to the single speaker playing the mixed output...

Turn that off...

Then turn on the four speakers, each playing the one waveform fed to its own single speaker.....

Will they both sound the same to a person listening at a distance?
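One way to probe the question numerically is a rough Python sketch. Everything here is an assumption for illustration (the frequencies, the 5 m listening distance, the few-millimetre path-length differences for the 2x2 grid, the sample rate), and real speakers, rooms, and ears are ignored:

```python
import math

C = 343.0    # speed of sound in air, m/s (approximate)
FS = 48000   # simulation sample rate (assumed)

freqs = [220.0, 277.2, 329.6, 440.0]   # one assumed frequency per oscillator

def sine(f, t):
    return math.sin(2 * math.pi * f * t)

def pressure_single(t, dist=5.0):
    # Case A: electronic mix, one speaker 5 m from the listener.
    # All components share a single propagation delay.
    return sum(sine(f, t - dist / C) for f in freqs)

dists = [5.000, 5.003, 5.003, 5.006]   # assumed path lengths (m) to each grid speaker

def pressure_quad(t):
    # Case B: four speakers; each waveform arrives with its own delay.
    return sum(sine(f, t - d / C) for f, d in zip(freqs, dists))

# Largest sample-by-sample difference over a 10 ms window:
diff = max(abs(pressure_single(n / FS) - pressure_quad(n / FS))
           for n in range(FS // 100))
```

Under these assumptions `diff` comes out small but nonzero: the four-speaker version is a slightly phase-smeared copy of the electronic mix, and the discrepancy shrinks as the listening distance grows relative to the grid spacing.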


Comments

  • Isn't it the same as hearing the mix and then hearing the master? I would say no, the sound is not the same

  • You'll get some phasing effects with multiple speakers, so no they won't sound the same.

  • edited December 2019

    I think a better experiment would be to use the same 2x2 speaker placement for both scenarios.

    Even then, I think a combination of constructive interference and phase cancellation would do enough to be able to differentiate between them.

  • It's complex -- really complex. The ear(s) are incredibly good at location information and pulling out details of the environment from sound. A book I really like about this kind of thing and more is "The Physics and Psychophysics of Music" by Roederer. Really fascinating stuff.

  • If it were theoretically possible to have all of the speakers in the same position (which isn’t completely impossible, you’d need concentric cones and concentric voice coils, but never mind the practice, this is theory), then yes, they should sound as good as the same. The air space becomes the equivalent of a mixer.

    If there were two oscillators, and they were creating a tone but they were 180° out of phase, then sending them to an electronic mixer will introduce the cancellation of the two. Similarly, the push and the pull on the same air will give the same result, in theory.
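The 180° case described above is easy to check numerically. A minimal Python sketch (the 440 Hz tone and 48 kHz sample rate are assumed for illustration):

```python
import math

FS = 48000  # assumed sample rate

def sine(f, t, phase=0.0):
    return math.sin(2 * math.pi * f * t + phase)

# Two oscillators at the same frequency, 180 degrees (pi radians) apart,
# summed the way an electronic mixer (or shared air) would sum them:
mixed = [sine(440.0, n / FS) + sine(440.0, n / FS, phase=math.pi)
         for n in range(1024)]

peak = max(abs(s) for s in mixed)  # ~0: the two tones cancel completely
```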

  • edited December 2019

    @u0421793 said:
    If it were theoretically possible to have all of the speakers in the same position (which isn’t completely impossible, you’d need concentric cones and concentric voice coils, but never mind the practice, this is theory), then yes, they should sound as good as the same. The air space becomes the equivalent of a mixer.

    If there were two oscillators, and they were creating a tone but they were 180° out of phase, then sending them to an electronic mixer will introduce the cancellation of the two. Similarly, the push and the pull on the same air will give the same result, in theory.

    Thanks. This is the way I meant for the experiment to be understood.

    It's about looking at the physics of how various sound waves mix in the air... vs... waveforms being mixed by a very accurate electronic mixer before being transformed into sound waves that will travel through the air.

    Think of an analogy using light (which is an electromagnetic waveform).....

    (Note edited because blue and yellow light do not make green light.. That's a paint thing)

    If we take a spotlight that produces a daylight color temperature of 5500K.

    Then put a transparent RED filter (a filter Gel) in front it, a person viewing the light at a distance will see a RED light.

    But if we then put an additional BLUE filter Gel in front of the RED filter, a person viewing the light at a distance will see a MAGENTA light.

    RED mixed with BLUE makes MAGENTA.

    How are sound waves in air similar, or different, than light waves?
    If they behave differently, then why?

    It's important to keep the experiment simple, so imagine that once the sound is produced from its source there is nothing that will interfere with it.

    Let both the speaker(s) and the listener be floating magically 500 feet up in the air, with no ambient noise and completely still air.

  • I don’t believe multiple speakers would ever produce the same sound as a single speaker because location in space of the speaker/speakers and listener affect the perceived sound.

    I think if you moved far away enough from your initial hypothetical speaker/speakers, they would eventually sound the same though.

  • @Max23 said:

    if we are somehow floating in the air there won't be much reverb/ cluster of delays as there is almost nothing to reflect the vibration , the further away it is the less high end and panning it will have,
    depending on how u think there is no reverb at all as we are into this void of air

    so its just a signal that has delay depending on positions and some location around you (place where u hear things)
    if u do it with 4 speakers
    you have 4 delayed versions of the same thing - this is going to phase

    But each of the four speakers is not playing the same thing.

    Each speaker is playing its own waveform (square, sine, saw, triangle).

    Each waveform could be the same amplitude.

    But each waveform could be at a different frequency, in both versions of the experiment: the one where the four waveforms are electronically mixed and then sent to the single speaker,

    and the one where each of the four waveforms is sent to its own amp and then to its own speaker.

    The purpose of the experiment is to compare the sound of four waveforms sent to four different speakers VS. the same four waveforms electronically mixed then sent to a single speaker.

  • @CracklePot said:
    I don’t believe multiple speakers would ever produce the same sound as a single speaker because location in space of the speaker/speakers and listener affect the perceived sound.

    I think if you moved far away enough from your initial hypothetical speaker/speakers, they would eventually sound the same though.

    This is what my common sense tells me would happen. But in physics things don't always happen the way you might think.

    My underlying purpose for this thought experiment is to find ways to think about how different waveforms will behave when mixed together to create synthesizer sounds.

  • @horsetrainer said:

    My underlying purpose for this thought experiment is to find ways to think about how different waveforms will behave when mixed together to create synthesizer sounds.

    It's a good idea. It's kind of the same thing as how orchestration works and why different groups of instruments have different numbers and placements.

    You could do it in a virtual space.

  • @Max23 said:
    Each speaker is playing its own waveform (square, sine, saw, triangle).

    Each waveform could be the same amplitude.

    so every speaker has a different waveform, on different pitch, and perceived levels are the same

    not much gonna happen

    depending on where u stand stuff is louder or softer, has more or less delay, and u can kind of walk around in it

    Is the sound from the four speakers, similar to the sound from the single speaker playing the four electronically mixed waveforms?

  • edited December 2019

    At risk of derailing the thread, consider this: Binaural beats. https://en.wikipedia.org/wiki/Beat_(acoustics)#Binaural_beats

    Binaural beats are created when we feed the left ear with one frequency and the right ear with a similar but different frequency such that they should, if they were electronically summed to mono, produce a beat frequency in the same way that most of us here are perfectly used to when we tune two oscillators very close together.

    However, in binaural beats, the slightly detuned similar signals are never mixed, they are kept pure and separate.

    What happens is that they form the beat heterodyning in our head! Arguably, this dichotic effect is an illusion.
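For contrast, the electronically-summed-to-mono case mentioned above is pure trigonometry, and can be sketched in Python (the frequencies and sample rate are illustrative; the binaural version, by definition, never forms this summed signal):

```python
import math

f1, f2 = 440.0, 444.0   # two close frequencies (assumed); beat rate = 4 Hz
FS = 8000               # sample rate for the check (assumed)

def summed(t):
    # The mono (electronically summed) version of the pair:
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

# Trig identity: sin(a) + sin(b) = 2 * sin((a+b)/2) * cos((a-b)/2),
# so the sum is a 442 Hz carrier under an envelope beating at |f1 - f2| Hz.
null_time = 1.0 / (2.0 * abs(f1 - f2))   # first full cancellation, at 0.125 s
```

In the binaural case the two tones are kept separate all the way to the ears, so this cancellation never physically happens; the perceived beat is constructed in the auditory system.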

  • edited December 2019

    PART 2....

    If we can agree that "most" synthesizers basically work on a principle of "combining different waveforms" using various processes and methods, and the resulting output is a new more complex waveform that consists of the "sum total" of all waveforms used in its creation......

    Then....
    If one were to design a computer algorithm that did not introduce another "waveform" (per se) to be added and mixed, but was instead designed to make calculated alterations to a "final" waveform shape, in any fashion that resulted in a modified waveform that is not a product of mixing waveforms..... What would one call such a process? Does such a thing exist?

    For example.... Imagine you are looking at an oscilloscope image of a complex synth sound waveform. But let's give this oscilloscope the power to be an editor.

    Just for fun, let's use the editor to "insert snippets" of a wave shape, like for example a triangle wave. Just one triangle wave... like this "^v"... The triangle wave could be the same frequency as the waveform it's being entered into. But it's just added once. What would this process be called?

    I don't think it would be wavetable because it's way too short of a wave. Not sure it would be granular either.

    My imagined purpose for it would be for modifying waveforms by introducing various shapes of simple or complex "snippets of waveform" into an existing waveform, using an editor that can add modifying "snippets" according to programmable algorithms or any other type of methodology, including direct editing in a waveform's timeline.

    Another use could be for creating completely synthetic waveforms built up from chaining same frequency (or different frequency) snippets together. In a way I think that would look like a form of "Micro Granular" synthesis.

    Also for fun, let's design a new waveform where we take thousands of different waveform snippets and chain them all together, one after the other. What could that sound like?

    Just thinking out loud about these things.....
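The single-snippet idea is easy to sketch in Python. Everything here is made up for illustration (the `complex_wave` source, the frequency chosen so one cycle is a whole number of samples, and the choice of which cycle gets replaced):

```python
import math

FS = 44100
FREQ = 441.0             # chosen so one cycle is exactly FS/FREQ = 100 samples
CYCLE = int(FS / FREQ)

def complex_wave(n):
    # A made-up "complex synth waveform": fundamental plus a 3rd harmonic.
    t = n / FS
    return (0.6 * math.sin(2 * math.pi * FREQ * t)
            + 0.4 * math.sin(2 * math.pi * 3 * FREQ * t))

def triangle_cycle(length):
    # One cycle of a triangle wave as a list of samples: -1 up to +1 and back.
    out = []
    for i in range(length):
        p = i / length
        out.append(4 * p - 1 if p < 0.5 else 3 - 4 * p)
    return out

wave = [complex_wave(n) for n in range(CYCLE * 8)]  # 8 cycles of source material
snippet = triangle_cycle(CYCLE)
wave[3 * CYCLE:4 * CYCLE] = snippet                 # splice one triangle cycle in
```

Chaining many such single-cycle snippets end to end, as described above, would be this same splice repeated; whether that counts as wavetable, granular, or something else is exactly the naming question being asked.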

  • edited December 2019

    @horsetrainer said:

    @CracklePot said:
    I don’t believe multiple speakers would ever produce the same sound as a single speaker because location in space of the speaker/speakers and listener affect the perceived sound.

    I think if you moved far away enough from your initial hypothetical speaker/speakers, they would eventually sound the same though.

    This is what my common sense tells me would happen. But in physics things don't always happen the way you might think.

    My underlying purpose for this thought experiment is to find ways to think about how different waveforms will behave when mixed together to create synthesizer sounds.

    Well it depends how technical you are being. Perception is very different from what technical measurement shows is happening.
    Are you concerned with math, physics, etc. or how it actually is perceived, how it actually sounds?

  • @horsetrainer said:
    PART 2....

    If we can agree that "most" synthesizers basically work on a principle of "combining different waveforms" using various processes and methods, and the resulting output is a new more complex waveform that consists of the "sum total" of all waveforms used in its creation......

    Then....
    If one were to design a computer algorithm that did not introduce another "waveform" (per se) to be added and mixed, but was instead designed to make calculated alterations to a "final" waveform shape, in any fashion that resulted in a modified waveform that is not a product of mixing waveforms..... What would one call such a process? Does such a thing exist?

    For example.... Imagine you are looking at an oscilloscope image of a complex synth sound waveform. But let's give this oscilloscope the power to be an editor.

    Just for fun, let's use the editor to "insert snippets" of a wave shape, like for example a triangle wave. Just one triangle wave... like this "^v"... The triangle wave could be the same frequency as the waveform it's being entered into. But it's just added once. What would this process be called?

    I don't think it would be wavetable because it's way too short of a wave. Not sure it would be granular either.

    My imagined purpose for it would be for modifying waveforms by introducing various shapes of simple or complex "snippets of waveform" into an existing waveform, using an editor that can add modifying "snippets" according to programmable algorithms or any other type of methodology, including direct editing in a waveform's timeline.

    Another use could be for creating completely synthetic waveforms built up from chaining same frequency (or different frequency) snippets together. In a way I think that would look like a form of "Micro Granular" synthesis.

    Also for fun, let's design a new waveform where we take thousands of different waveform snippets and chain them all together, one after the other. What could that sound like?

    Just thinking out loud about these things.....

    This description is very similar to Korg Wavestation, except you want to use single cycle waves mostly.

  • @Max23 said:

    @CracklePot said:
    I don’t believe multiple speakers would ever produce the same sound as a single speaker because location in space of the speaker/speakers and listener affect the perceived sound.

    I think if you moved far away enough from your initial hypothetical speaker/speakers, they would eventually sound the same though.

    location is just delay here
    and hm some kind of perceived direction
    if you double this up it will sound different ...

    u have 2 ears
    your girlfriend drops a bottle
    sound wave hits both ears with a few milliseconds difference
    that's how our brain makes up position ...

    I am talking more about phase and how it relates to distance between speakers, not directionality.
    And distance is not just delay. If your distance from the hypothetical speaker array is great enough, the distance between the individual speakers becomes less significant.
    Also, high frequencies are attenuated more over distance than low frequencies, so the effect of minor phase discrepancies will not carry over longer distances.
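The geometry behind "speaker spacing matters less at a distance" is easy to quantify. A small Python sketch (the 10 cm spacing and the 1 m / 100 m listening distances are arbitrary assumptions):

```python
import math

SPACING = 0.1   # metres between two adjacent speakers (assumed)

def path_difference(listener_dist):
    # Extra path length from an off-centre speaker, with the listener
    # directly in front of the other speaker.
    return math.hypot(listener_dist, SPACING) - listener_dist

near = path_difference(1.0)    # ~5 mm of extra path at 1 m
far = path_difference(100.0)   # ~0.05 mm of extra path at 100 m
```

At 1 m the extra path is around 5 mm, a noticeable phase offset at treble wavelengths; at 100 m it shrinks to around 0.05 mm, which supports the intuition that the array collapses toward a point source as you back away.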

  • @Max23 said:

    @StudioES said:
    Try a 1x12" guitar cab vs a 4x12" guitar cab. Not exactly the same, but similar.

    Do a sound check in an empty room. Then try it when the room is full of people. Where'd the treble go?

    temperature is the same.
    play in a cold empty room vs a room full of dancing ppl

    I would say the bodies would affect the sound more than the temperature in this example.
    Temperature affects air density, the same as altitude above sea level. Air is the medium, so its density will affect the behavior of sound.

    Human bodies absorb the high frequencies. That is where the sound check treble goes during the show.
