Problems of the composer of computer music (or sonic art)

First, I should say that the concept of “computer music” is a bit odd, and it might be more accurate to speak of “computer-aided composition of music”, but that is too long a phrase for everyday use. “Sonic art” is likewise a concept loaded with meanings that could derail the actual topic here.

Traditionally there have been two kinds of electronic music in the “art music” field: synthesized electronic music and “concrete music” (musique concrète). What these terms really mean could be debated endlessly, but I am more interested in the practical problems of using artificially synthesized sounds versus microphone-recorded sounds when composing. (There has probably never been such a strong division in the genres of popular music. Technology has been used as it has become available, and no deeper philosophical meaning or dogma has existed. “What sounds good, is good”. Even if it sounds amusing and dated 20 years later.)

The composer who decides to use purely synthesized sounds can simply determine and realize the desired result by specifying the absolute values of the parameters involved. In the most trivial case this could mean playing the keys of a traditionally designed MIDI keyboard so that a synthesizer sounds those pitches. It could also mean setting up automation envelopes in a DAW to drive an oscillator at exact frequencies in hertz, or writing the score events of a Csound score. The parameter could of course be anything else: overall volume, volume envelope, spatial location of sounds, harmonic content and so on. Anything can simply be set as an absolute value, at least in principle. Of course, technical problems exist with everything. With oscillators, for example, if high-pitched sounds are required, getting them to have only the desired harmonic content can become problematic if the synthesizer used is not up to the task. Aliasing can result, altering the harmonic content, which was supposed to be a parameter under the composer’s full control! But this is a completely solvable problem; the composer simply needs to be aware of it and discover the proper tools to use.
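
To make the aliasing point concrete, here is a minimal sketch (Python is my own choice of illustration language, not anything the post prescribes) that computes where the upper partials of a naive, non-band-limited sawtooth actually land once they fold back around the Nyquist frequency:

```python
SR = 44100           # sample rate in Hz (assumed for illustration)
F0 = 3000.0          # a deliberately high fundamental (assumed)
NYQUIST = SR / 2

def aliased_frequency(f, sr=SR):
    """Return the frequency at which a partial of nominal frequency f
    actually sounds after sampling: it folds (reflects) around Nyquist."""
    f = f % sr
    return f if f <= sr / 2 else sr - f

# A naive sawtooth has partials at every k * F0; those above Nyquist
# do not vanish, they reappear at unrelated, inharmonic positions.
for k in range(1, 12):
    f = k * F0
    if f > NYQUIST:
        print(f"partial {k}: nominally {f:.0f} Hz, sounds at {aliased_frequency(f):.0f} Hz")
```

A band-limited oscillator simply refuses to generate the partials above Nyquist in the first place, which is the “proper tool” for this particular problem.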
proper tools to use. The major problem with electronically synthesized sounds is that it is paradoxically very hard to make them interesting sounding *because* of
all the control available. The required amount of microscopic control data is massive and it is not uncommon for composers to simply ignore that. Instead
they might hope that the pitched notes and rhytms and the form of the piece they have chosen to use are interesting enough for the listener. Sometimes that might work, sometimes not.
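
To put a rough number on “massive”, here is a back-of-the-envelope sketch; every figure in it is an assumption of mine for illustration, not something the argument depends on exactly:

```python
# How many explicit values would full microscopic control actually take?
control_rate = 100     # control values per second, per parameter (assumed)
parameters = 6         # pitch, amplitude, pan, filter cutoff, etc. (assumed)
voices = 8             # simultaneous synthesized voices (assumed)
duration = 5 * 60      # a five-minute piece, in seconds

values_needed = control_rate * parameters * voices * duration
print(f"{values_needed:,} values to author by hand")  # 1,440,000
```

No one writes over a million breakpoints by hand, which is why the temptation to leave the microscopic level static, and hope the notes carry the piece, is so strong.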

The composer using microphone-collected sounds, however, faces considerable problems if the desired sonic result is one that follows some compositional intention, rather than letting the compositional intention follow from the properties of the recorded sounds. There obviously isn’t anything inherently bad about letting the recorded sounds dictate what happens in the composition. But it can be frustrating and limiting if that is the only option easily available. A recorded sound that is low in frequency content can’t sound high unless a transforming process is applied. Conversely, a recorded sound that is shrill can’t sound like rumbling bass unless it is transformed by some means. This states a very obvious thing, but sometimes the obvious is useful for stimulating thought. Here again technological limitations are encountered, for example with the problem of changing pitch register. Even the best and most expensive (in monetary price and/or CPU load) pitch-shifting technologies start to destroy the nature of the processed sound when the pitch is altered by more than a few semitones. It depends, of course, on the source sound: human voices are very sensitive material, while percussive or noisy sources might fare better. Again, this is no inherent evil, and the artifacts of pitch shifters at extreme settings can even be very attractive sounds to listen to. But that is a byproduct of the technology used and not necessarily part of the original compositional intention (“I want a high-register sound out of a bassy sound I had recorded…”). The good thing about recorded sounds, however, is that they are almost always rich and “organic” from the start, and therefore potentially more interesting to the listener in their microscopic structure.
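
To see why large shifts are so destructive, consider the crudest possible pitch shift, plain resampling: every frequency in the recording, including the formants that carry a voice’s character, gets multiplied by the same ratio 2^(n/12) for a shift of n semitones. A minimal sketch, assuming nothing beyond NumPy (linear interpolation stands in for a proper band-limited resampler):

```python
import numpy as np

def resample_shift(samples: np.ndarray, semitones: float) -> np.ndarray:
    """Crude pitch shift by resampling. The whole spectrum, formants
    included, is scaled by one ratio, and the duration changes too."""
    ratio = 2.0 ** (semitones / 12.0)   # +12 semitones doubles every frequency
    positions = np.arange(0, len(samples) - 1, ratio)
    return np.interp(positions, np.arange(len(samples)), samples)

sr = 44100
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 200 * t)      # a 200 Hz "bassy" test tone
high = resample_shift(bass, +24)        # two octaves up lands at 800 Hz
print(len(bass), len(high))             # the shifted sound is also 4x shorter
```

Duration-preserving shifters (phase vocoders, granular methods and the like) fix the length problem, but they smear transients and move formants in ways of their own, and that is exactly the degradation described above.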

Obviously similar problems exist with other parameters besides pitch/register. The volume levels of recorded sounds often vary wildly, and it can be complicated to cancel those variations out before applying a compositionally desired envelope. Recordings may contain too much reverb/ambience when the composition requires dry sounds at some given moment. And so on.
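
For the volume case at least, a workable if crude recipe exists: estimate the recording’s own loudness contour, divide it out, then multiply in the envelope the composition calls for. A hedged sketch; the window size and silence floor are assumptions, and real material usually wants smoothing on top of this:

```python
import numpy as np

def impose_envelope(samples: np.ndarray, target_env: np.ndarray,
                    window: int = 1024, floor: float = 1e-4) -> np.ndarray:
    """Cancel a recording's own loudness variations, then apply a desired
    envelope. target_env holds one value per sample, in the range 0..1."""
    kernel = np.ones(window) / window
    rms = np.sqrt(np.convolve(samples ** 2, kernel, mode="same"))  # running RMS
    flattened = samples / np.maximum(rms, floor)  # floor keeps silence from exploding
    return flattened * target_env                 # output may still need overall gain

# Example: force a linear fade-in onto a recording, whatever its original dynamics.
sr = 44100
recording = np.random.uniform(-1, 1, sr)          # stand-in for a microphone recording
fade_in = np.linspace(0.0, 1.0, len(recording))
shaped = impose_envelope(recording, fade_in)
```

The reverb/ambience problem is much harder; dereverberation remains an open research topic rather than a one-liner, which rather supports the general point here.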

I have no Grand Solution to the (more or less obvious and already known) problems presented; I simply wish more solutions were available.

PS. Many composers have of course used both approaches to making sounds, synthesized and concrete. Maybe they have different pieces exploring each, or they mix the two together in one piece. To me, however, the two seem very difficult to integrate and mix effectively. (Also in the bigger picture, over several compositions, it is difficult to have any kind of unified approach to thinking about the composition processes.)

One might also ask why make a problem out of something like getting high-pitched sounds from recordings of a low-frequency sound source; wouldn’t the solution be to record something else that *is* high-pitched? Certainly that would be easier in some ways, but it can also create another layer of complexity and difficulty of integration.


2 Responses to Problems of the composer of computer music (or sonic art)

  1. the finger, and the mouth says:

    oh, well here you are agreeing with me, it is only one more step to assume that acoustic sound we may think of as “interesting for more than a moment” or “euphonic” is fractal in nature to the extent we are capable of listening deep enough into the sound… a simple song can carry power if it has that depth, which i would also say amounts to microscopic changes as compared to the full range of any of the numeric parameters you refer to. imo, it is the only way to develop true fans of any music; those who will listen deep enough to derive the most meaning possible may tend to act like evangelists and that is how most famous music got noticed; the composer needs to put something in… work! however, i see no reason why a.i. algorithms cannot make educated guesses and learn any particular user’s workflow… but actually i haven’t seen this type of system working as well as i think it could yet… i see no reason why audio software developers shouldn’t at least put some primitive logic into things to save keystrokes and such so that the composer can work harder on the music itself. “No reason” is rhetorical; imo, if the market demanded it, then of course devs would step up and deliver…

  2. the finger, and the mouth says:

    as for live recording, i haven’t heard about many true advances in microphone technology for about seventy years; considering that recordings don’t tend to sound like live performances, well… that is what keeps the music business alive! recordings don’t make money, gigs do. so who wants to change it?!
