Due to the ever-so-convenient method of writing ad-hoc code, HourGlass currently handles writing audio files when it renders like this:
The rendering code renders audio from the audio mixer, and the results are passed to a function in the libsndfile library that writes out the rendered buffer. Easy enough, and it works. However, if I needed to support audio file formats that libsndfile doesn't handle, or even wanted to drop libsndfile entirely (not likely, but who knows…), this kind of arrangement would quickly start to suck. With new formats bolted on, it would look like this:
The rendering code would now need to know the details of each of these four ways of writing audio files (which, in this illustrative scenario, are completely different from each other). That clearly isn't acceptable. The code really needs to be something like this:
The rendering code would only interface with a module (a C++ class, really) that can internally do whatever it has to in order to implement the actual writing of the files. Of course, this kind of design is more complicated to implement in code, and so far there hasn't been a real justification for the extra complexity, even though it would clearly be more future-proof.
This is just a technical rant/reminder to myself, not a commitment that HourGlass will support WavPack, mp3 or Monkey's Audio files in the future. (I don't even see how mp3 or Monkey's Audio would really be useful, but perhaps WavPack could be…) The same points of course also apply to reading audio files for use in HourGlass, as well as to recording the live output to an audio file.