HourGlass surround progress part 8

Finally did some tests with per-fragment panning!

At the moment I am not entirely sure how effective this is sound-wise. The panning movements within the individual fragments can easily get lost when many voices are playing at once, when the fragments are short, and so on. My initial hunch that static pan positions per fragment are enough for HourGlass might turn out to be right.

The additional CPU load doesn't seem too heavy, though, and the per-fragment panning can also be turned off. For now this just does a simple linear interpolation (a straight line) between two pan coordinates. Allowing more complicated paths might be more interesting sonically, or not…
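
To make the idea concrete, here is a minimal sketch of straight-line interpolation between two pan coordinates over a fragment's duration. The types and names are hypothetical and this is not the actual HourGlass engine code; it just shows the basic calculation.

    // Minimal sketch of per-fragment pan interpolation (hypothetical types,
    // not the actual HourGlass code). Each fragment stores a start and an end
    // pan coordinate, and the position is linearly interpolated across the
    // fragment's length.
    #include <cstdio>

    struct PanPoint { float x; float y; }; // a 2D pan coordinate, -1..1 per axis

    // t is the normalized position within the fragment, 0.0 .. 1.0
    PanPoint interpolatePan(const PanPoint& start, const PanPoint& end, float t)
    {
        return { start.x + (end.x - start.x) * t,
                 start.y + (end.y - start.y) * t };
    }

    int main()
    {
        PanPoint start{ -1.0f, -1.0f }; // e.g. front left
        PanPoint end{ 1.0f, 1.0f };     // e.g. rear right
        const int steps = 8;            // stand-in for the fragment length
        for (int i = 0; i < steps; ++i)
        {
            float t = float(i) / float(steps - 1);
            PanPoint p = interpolatePan(start, end, t);
            std::printf("step %d : pan (%.2f, %.2f)\n", i, p.x, p.y);
        }
        return 0;
    }

In a real engine the interpolation would of course be advanced per sample or per processing block, and a more complicated path shape could later be swapped in behind the same interface.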

Making a suitable GUI for this may be quite involved. The solution which seems obvious at first, just letting the user draw a pan path, is not really enough: each fragment should be able to have its own panning path, but the user can't really be expected to draw hundreds of paths. There are solutions to this, such as having the user draw a few paths which are then assigned to the voices in round-robin fashion or randomly (a rough sketch of that idea follows below). But I need to think about this further before making anything too complicated, since it's not entirely clear yet whether the per-fragment panning paths even sound that interesting.
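
To illustrate only the assignment idea (the names and types here are hypothetical and not part of HourGlass), distributing a small set of user-drawn paths over the fragments could look roughly like this:

    // Sketch of assigning a few user-drawn pan paths to fragments,
    // either round-robin or randomly. Hypothetical types and names.
    #include <cstdlib>
    #include <vector>

    struct PanPath { /* e.g. a list of pan coordinates to interpolate through */ };

    struct PathAssigner
    {
        std::vector<PanPath> paths; // the few paths the user has drawn
        size_t counter = 0;

        // Round-robin: fragment N gets path N modulo the number of paths.
        const PanPath& nextRoundRobin()
        {
            const PanPath& p = paths[counter % paths.size()];
            ++counter;
            return p;
        }

        // Random: each fragment picks one of the available paths at random.
        const PanPath& nextRandom() const
        {
            return paths[std::rand() % paths.size()];
        }
    };

    int main()
    {
        PathAssigner assigner;
        assigner.paths.resize(3); // e.g. three user-drawn paths
        for (int fragment = 0; fragment < 10; ++fragment)
        {
            // each newly started fragment gets the next path in order
            const PanPath& path = assigner.nextRoundRobin();
            (void)path; // a real engine would hand this to the fragment's voice
        }
        return 0;
    }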


9 Responses to HourGlass surround progress part 8

  1. Tom says:

    Interesting. Thanks a lot for trying!
    So I guess the ear is not sensitive enough to positioning to differentiate many sources and only locates the main ones (at least with a quite limited number of sound sources).
    Makes sense in an evolutionary/survival way. You don't need to locate each leaf in the forest by ear as long as you hear the one twig being stepped on by the tiger 😉

    If you think it’s not worth it to do per-grain panning please don’t spend too much time because of me 😉
    Maybe I’m actually looking more for a kind of particle-engine for sound where only a handful of sound-particles fly by than a full blown grain engine throwing hundreds of grains at me.

    As for the paths: I personally would not want perfect control over every grain but more a kind of “go in this general direction and let each grain have a certain offset from the path”. Similar to how particle systems work in 3D software. Or like a swarm – have one main path and the other grains follow it around. (Tone Carver once created a plugin like that (Boids: https://tonecarver.wordpress.com/boids/) but it turned out not as fascinating as one would have expected, maybe for the same reasons?).

    But again, if the effect doesn’t prove too interesting, I wouldn’t go crazy with the path-creation GUI 😉

    Thank you so much!

    Cheers,

    Tom

    • xenakios says:

      Well, I’ve so far just done some initial experiments; this might turn out to be more interesting in the end. I didn’t initially have, for example, per-fragment pitch envelopes, but when Oli Larkin suggested I add them, that turned out pretty nice. Also, the per-fragment panning movements so far required only about 50 lines of new code in the sound processing engine, so this didn’t add significant complexity to HourGlass yet. (Which is always good…) The GUI is of course an entirely different matter… (So far the per-fragment panning feature has no GUI.)

  2. Michael L says:

    I have sat with my field recorder in a rainforest, and meaningful sounds come from all directions. Two things are interesting: each sound is at least slightly different (even breezes or trees creaking) and each unique sound has its specific location. Sounds that move, change as they move. Being in the centre of such subtle diversity is quite enchanting. Something to think about….

    • Tom says:

      Yeah, that was my initial thinking too.
      But a rainforest has many more “speakers” installed than a typical surround system… 😉

      • xenakios says:

        I think even 4 speakers can produce a pretty amazing sense of “surroundness” and “space” and “location”(*), if suitable sound processing techniques are used. I don’t think the upcoming HourGlass version achieves all of that, but based on some recordings I’ve heard, it’s something that should be attempted, anyway.

        (*) (If pinpoint precision of the sound’s location is required, then I guess an unreasonable number of loudspeakers will be required, though…)

      • Tom says:

        I totally agree; my comment was aimed at pure panning of the sound between basically 4 “real” sources probably not being enough for that.
        Did you experiment with more advanced positioning methods? I think wave field synthesis installations use quite advanced calculations…

  3. xenakios says:

    I’ve only done simple amplitude panning in the HourGlass code so far. But I am sure I will experiment with Ambisonics, time difference panning etc. in the future. (Probably going to be HourGlass 2 stuff…)
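
    For illustration, below is a minimal sketch of one common form of simple amplitude panning to 4 speakers. The equal-power bilinear formulation is an assumption for the example, not necessarily what HourGlass actually computes.

        // Sketch of simple equal-power amplitude panning to 4 speakers
        // (front-left, front-right, rear-left, rear-right). One common
        // formulation, not necessarily the one used in HourGlass.
        #include <array>
        #include <cmath>
        #include <cstdio>

        // x and y are normalized pan coordinates in 0.0 .. 1.0,
        // (0,0) = front left, (1,1) = rear right.
        std::array<float, 4> quadPanGains(float x, float y)
        {
            // The bilinear weights sum to 1; taking the square root keeps
            // the summed power constant as the position moves around.
            return {
                std::sqrt((1.0f - x) * (1.0f - y)), // front left
                std::sqrt(x * (1.0f - y)),          // front right
                std::sqrt((1.0f - x) * y),          // rear left
                std::sqrt(x * y)                    // rear right
            };
        }

        int main()
        {
            // Example: a position halfway between the two front speakers
            auto g = quadPanGains(0.5f, 0.0f);
            std::printf("FL %.2f  FR %.2f  RL %.2f  RR %.2f\n", g[0], g[1], g[2], g[3]);
            return 0;
        }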
