
The Practice of Mastering - 2 : Means, Functions and Underlying Logic

Mar 28, 2005 - by Dominique Bassal
Dominique Bassal's article "The Practice of Mastering in Electroacoustic Music". Second part: "Means, Functions and Underlying Logic".
Objectives of this section


    - to present to the reader the tools and techniques used in mastering studios for audio optimization;

    - to demonstrate the basic reasons justifying the use of mastering, once past the objections based on overreaching or sensationalist theoretical positions.
    Listening - Systems

      Since the appearance of vinyl, mastering engineers have faced two major difficulties:

      - the large quantity and variety of signals that cannot be cut to disc;

      - the near-complete absence of devices able to detect or measure these recalcitrant signals.

      In fact, the only tool available at the time to detect and evaluate potentially problematic signals was… the human ear. Of course, the ear can only do so when it actually hears the signal on tape correctly. This forced mastering studios to improve, by every possible means, each element of their monitoring systems. For example:

      - tape recorders' playback heads are often replaced with more efficient versions, ordered from specialized firms, capable of reading up to 25-30 kHz;

      - a large part of the tape recorders' playback circuits is periodically replaced with high-performance electronics, as they become available;

      - consoles, often built on site, use top-notch electronic components whose cost would be prohibitive in a large multitrack console;

      - loudspeakers are considered essential for obtaining faithful reproduction. Some modern studios do not hesitate to spend more than US$90,000 a pair to acquire high-quality loudspeaker systems…

      - the acoustic environment is also very important. A lot of work and money is therefore invested in the design of the studio.

      In recording studios, which require a broader diversity of equipment, monitoring systems of this calibre are not as highly valued. In terms of acoustic design, for instance, recording facilities require multiple well-insulated, specialized spaces adapted to specific sound sources (drums, voice, piano, etc.). A mastering studio, on the other hand, concentrates its resources strictly on the design of a neutral acoustic space in the control room.

      Human competencies

        From its early development, as we have seen in the first section, custom mastering became a distinct specialization. In order to understand why this happened, we must first look at the essential knowledge and qualities of a good sound engineer. He must:

        - know the acoustic particularities of the recording room used, and be able to anticipate the interaction between the instrument to be recorded and these particular acoustic features;
        - possess a detailed knowledge of the microphones used and their specific behaviour;
        - be able to easily interact with musicians and maintain a pleasant work atmosphere;
        - maintain an optimal gain level throughout the signal path.

        Few elements here could be considered essential to the "making" of a good mastering engineer. Let's now look at the requirements of a competent mixing engineer. He must:

        - be able to keep an updated mental image of the whole mixer/external effects/patchbay configuration, often simply called the patch, and be ready to modify it at any moment without making any mistakes;
        - possess confident taste, often one able to incorporate current trends;
        - be able to identify what instruments have to be emphasized or set more into the background;
        - be able to quickly determine the overall balance of levels and timbres, before listening fatigue sets in.

        Again, one would hardly consider these abilities to be in any way useful to a mastering engineer. In fact, the numerous technical contingencies inherent in the work of the sound engineer and the mixing engineer tend to distract them from the vigilance that should be given to the overall characteristics of the sound stream. This vigilance therefore becomes the responsibility of the mastering engineer. He must above all demonstrate sustained attention over time, so as to locate the subtlest problems. He must in addition be able to quantify these problems, be they a problematic frequency or an amplitude contour to control. He must also possess a long-term acoustic memory, so as to remain consistent with the type of audio profile commonly used in the particular musical style.

        As we can see, all these qualities can only be developed as part of a very precise mental configuration, based on the ability to crystallize in practice a specialized and highly controlled listening.
        Intervention skills

          These objective factors, material and human, combine to produce the following situation: the overall monitoring quality in mastering studios is always, necessarily, a couple of steps ahead of that in recording studios. The mastering engineer is thus in a position not only to perceive, but even to foresee or anticipate problems, ranging from the subtlest features to the most general frequency characteristics, that were simply unnoticed by, or impossible to hear for, recording engineers and mixers.

          What exactly does the mastering engineer hear that escaped the attention of the previous professionals? He usually does not know the project and has never heard the music; this gives him a certain detachment, a perspective different from that of the previous engineers. And unlike the sound engineer, who had to deal with each recording one by one, and the mixing engineer, who dealt with the interaction between the tracks, the mastering engineer can apprehend the product as a whole, with a global judgment over all of its sonic qualities. More specifically, he can hear:

          - deficiencies in the monitoring systems of recording and mixing studios, revealed through added or subtracted features in the signal;

          For example: a mixing engineer whose monitors have a -6 dB dip centered around 80 Hz will compensate by systematically adding a few dB at 80 Hz to the channels that have signal in these frequencies. Or he could, on the contrary, decide to completely "clean" this portion of the spectrum through radical filtering, since it does not seem to contain any strong signal.

          - he also hears errors due to acoustic habits developed during a project, which tend to affect judgment through progressive habituation. It is very difficult for those involved in a project to detect or avoid such habits.

          For example: the sound engineer may record the first tracks of a project with a slight excess in high frequencies. On the following tracks, to maintain cohesion, he will tend to keep adding brilliance to the sound, while being less and less aware of doing so. Later, alerted by signs of auditory fatigue, he may start to systematically filter out the problematic frequencies.

          - he can even hear where further optimization might lead, once these problems are corrected and he can work on an "opened-up" sound;

          - the mastering engineer will also compare the predicted result with his knowledge of similar products, not for plagiarism purposes – which is in fact extremely difficult to accomplish – but in an attempt to push the product towards the limits he knows to be possible to obtain in the specific sonic style.

          With time, some mastering engineers may develop even more impressive listening abilities. They come to know, for instance, the strengths and weaknesses of the main studios and engineers they regularly deal with. They also become able to anticipate the impact that the compressors and enhancers of a given broadcaster will have on their work. This ability notably gave rise to the radio mix, the dance mix and other types of pressing optimized for very specific distribution channels. The high costs involved in this type of refinement confined it to commercial production, which then totally succumbed to the level wars. This eventually led to a standardization of optimization processes, followed by a similar standardization of broadcasters' audio treatment chains, which soon made specialized pressings obsolete.

          Optimization - Tools used

            At the heart of the mastering studio is the small-sized console. The analog version is most often homemade (a single supplier, Manley, manufactures them, and only to order), and until recently combined almost every signal processing module. These modules (Neumann, Telefunken, etc.), plug-in boards with controls on their front side, are designed specifically for mastering studios: their notched controls allow any signal processing treatment to be repeated exactly, if needed.

            Nowadays, these integrated systems have progressively been replaced by a less homogeneous mix of modular digital audio consoles (for example, the Daniel Weiss model), workstations and software, complemented by external hardware (Manley, Avalon, GML, Weiss, EAR, t.c. electronics, etc.) offering analog and digital signal processing. These devices are often "mastering versions" – again with notched controls – of high-end equipment also found in recording studios. Basic processing devices, be they plug-in cards, optional modules or separate units, in fact fulfill a limited number of functions:

            - passive filters and fixed-frequency parametric equalizers, whose high cost is justified by minimal phase shift and extreme precision in terms of centre frequency and amplitude correction (a minimal sketch of such a filter follows this list);

            - compressors, limiters and expanders, also extremely precise and efficient, whose processing becomes audible only at extreme settings;

            - de-essers, more transparent and efficient than their "studio" equivalents. Even if they are not as necessary as in the days of vinyl cutting, they remain the only available tool for controlling sibilants.
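
            To make this concrete, here is a minimal sketch of a single peaking (parametric) EQ band in Python, using the widely published Robert Bristow-Johnson "Audio EQ Cookbook" biquad formulas. It illustrates the kind of transfer function such an equalizer applies, not the circuitry of the units named above; the centre frequency, gain and Q values are placeholder assumptions.

```python
# Minimal peaking-EQ band (RBJ cookbook biquad) - illustrative values only.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0=80.0, gain_db=2.0, q=1.4):
    """Boost or cut `gain_db` around `f0` Hz, bandwidth set by `q`."""
    a = 10.0 ** (gain_db / 40.0)               # amplitude factor
    w0 = 2.0 * np.pi * f0 / fs                 # normalized centre frequency
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * a, -2.0 * np.cos(w0), 1.0 - alpha * a])
    den = np.array([1.0 + alpha / a, -2.0 * np.cos(w0), 1.0 - alpha / a])
    return lfilter(b / den[0], den / den[0], x)  # normalized coefficients
```

            A mastering-grade unit differs mainly in precision and in the exact repeatability of its stepped controls, not in the basic operation sketched here.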

            Although they are not used as much, a number of additional signal processing devices can be found in mastering studios:

            - reverberation units, mostly used to mask awkward transitions or inappropriate cuts which still sometimes escape the attention of mixing engineers;

            - acoustic simulators, used to flatten any difference in ambience between different pieces;

            - special effect devices, more commonly encountered in the era of the dance mix, but still used now and then.

            Finally, mastering equipment also includes a whole set of playback / recording devices in every format: magnetic (Studer, Ampex, Tim de Paravicini), 16- and 24-bit DAT (Panasonic, Sony, Tascam, etc.), multitrack cassette (in Adat formats: Fostex, Alesis, etc., and Hi8: Sony, Tascam, etc.) and magneto-optical master recorders (Genex, Otari, Akaï, Studer, etc.). We may also mention a variety of A-D and D-A converters, dither noise generators and sampling frequency converters (Weiss, Prism, dCS, Pacific Microsonics, Apogee, etc.), as well as all the equipment necessary for the transfer to media accepted by manufacturers, as already mentioned.
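
            Among the devices just listed, the dither noise generator is the simplest to illustrate. Below is a minimal Python sketch of the standard TPDF (triangular probability density function) approach to word-length reduction, assuming float samples normalized to ±1.0 and a 16-bit target; it is a generic textbook method, not the algorithm of any specific unit named above.

```python
# Word-length reduction with TPDF dither - a generic textbook sketch.
import numpy as np

def to_16bit_tpdf(x, rng=None):
    """Quantize float samples in [-1.0, 1.0] to 16-bit integers.
    Triangular noise spanning +/- 1 LSB decorrelates the quantization
    error from the signal, turning distortion into benign noise."""
    rng = rng or np.random.default_rng()
    lsb = 1.0 / 32767.0                      # one quantization step
    tpdf = (rng.uniform(-0.5, 0.5, x.shape)
            + rng.uniform(-0.5, 0.5, x.shape)) * lsb
    y = np.round((x + tpdf) * 32767.0)
    return np.clip(y, -32768, 32767).astype(np.int16)
```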

            Methods

              Working methods in mastering can vary tremendously from one engineer to another, and even from one project to another. It would be futile to try to schematize all of this into a single process. A simple, short unstructured list may provide a good idea of the large number of possibilities:

              - some engineers start by trying to find the player / converter combination that best suits the product; it might even happen, although this is rarely the case, that this subtle process is considered to be sufficient;

              - equalization remains the tool par excellence. Used to compensate for the monitoring deficiencies of the recording and mixing studios and/or attention lapses of recording and mixing engineers, equalization flattens irritating bumps and unjustified dips. Corrections on the order of 9-12 dB over large frequency ranges are not rare;

              - when possible, equalization is also used to sculpt a more pleasant frequency profile, to emphasize or mask certain portions of the audio spectrum. Here, interventions are usually subtler: 1 or even 1/2 dB can be enough to achieve the desired effect;

              - still with equalization, we can mention the increase of extreme high / low frequencies, again often made necessary because of the listening deficiencies of previous studios;

              - dynamic control is also vital. As long as the goal is not to win first prize in the realm of the square wave, a world of possibilities opens up. With competent handling of attack and release times, threshold and compression ratio, one can (see the sketch after this list):

              - choose to emphasize transients by "isolating" them from longer sounds;
              - help to "discipline" a dynamic behaviour that is too erratic or distracting;
              - bring to light hidden or barely perceptible signals;
              - obtain a diversity of other results less easily reducible to literary description, but which are definitely part of the common aural experience of the majority of music consumers.
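
              As a rough illustration of how those four parameters interact, here is a minimal feed-forward compressor sketch in Python: an envelope follower smoothed by the attack and release constants drives a static threshold/ratio gain curve. All parameter values are placeholder assumptions; real mastering compressors are considerably more refined.

```python
# Minimal feed-forward compressor: envelope follower + static gain curve.
# Assumes x is an array of float samples normalized to +/- 1.0.
import numpy as np

def compress(x, fs, threshold_db=-18.0, ratio=4.0,
             attack_ms=5.0, release_ms=120.0):
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))    # attack smoothing
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))   # release smoothing
    env, out = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel           # rise fast, fall slow
        env = coeff * env + (1.0 - coeff) * level     # smoothed detector
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out
```

              With a slower attack, a transient passes before gain reduction settles in, emphasizing it against longer sounds; a fast attack and low threshold instead "discipline" erratic dynamic behaviour, as described above.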


              - the expander is a subtler tool, used to open up an overly timid mix or to accentuate the music / silence contrast. It is also more and more often used to revive a mix that has been flattened, the victim of a producer who could not wait for the mastering stage to secure his product a place on the Olympic podium of audio screaming;

              - the compressor / expander combination may seem paradoxical, but varying the time settings of each function allows the creation of an amplitude flux, a kind of internal breathing which can tone down ambiences that are too static. In the case of mixes with background noise sitting just under the threshold of audibility, this combination becomes essential to keep compression from raising the noise above that threshold (the static curve sketched below shows why);
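
              The following sketch of a static gain curve, with placeholder thresholds and ratios, shows the logic in numbers: compression evens out the loud region, while downward expansion below a second threshold pushes near-silent background noise further down instead of letting it rise.

```python
# Static gain curve combining compression and downward expansion.
# All thresholds and ratios are illustrative assumptions.
def static_gain_db(level_db, comp_thresh=-18.0, comp_ratio=3.0,
                   exp_thresh=-45.0, exp_ratio=2.0):
    gain = 0.0
    if level_db > comp_thresh:        # compress the loud region
        gain -= (level_db - comp_thresh) * (1.0 - 1.0 / comp_ratio)
    if level_db < exp_thresh:         # expand (attenuate) the quiet region
        gain -= (exp_thresh - level_db) * (exp_ratio - 1.0)
    return gain

# A -50 dB noise floor is pushed down to -55 dB instead of being raised:
print(static_gain_db(-50.0))   # -> -5.0
```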

              - the limiter fulfills an essential role in mastering, usually entrusted to a dedicated device that is most often digital. One must know that, even if the exact definition of digital overload – in terms of the number of successive samples at 0 dBFS (full scale) – may vary, pressing plants inevitably reject anything that actually exceeds this limit (a sketch of such an "over" detector follows);
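
              The detection side of that limit is simple enough to sketch. The function below counts "overs" as runs of consecutive full-scale samples in 16-bit integer audio; the threshold of three successive samples is only one common convention, precisely because, as noted above, the definition varies.

```python
# Count digital "overs": runs of consecutive full-scale 16-bit samples.
def count_overs(samples, max_consecutive=3):
    overs, run = 0, 0
    for s in samples:
        if abs(s) >= 32767:          # at or beyond 0 dBFS for 16-bit audio
            run += 1
            if run == max_consecutive:
                overs += 1           # count each qualifying run once
        else:
            run = 0
    return overs
```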

              - sequencing, also called pacing, remains the most traditional function of mastering. It can be broken down into several tasks:

              - determining the order of execution of the pieces on the final product;
              - deciding on the duration of silence to insert between the pieces;
              - cleaning beginnings and endings, often botched during mixing sessions that were a little too… enthusiastic (a fade-cleaning sketch follows this list);
              - making sure, on a general level, that the listening experience will be a coherent and pleasant one. The mastering engineer will not hesitate, for instance, to modify the equalization of a piece that may appear correct on its own, but that does not "fit" in the whole frequency profile of the product. The same may apply to the overall perceived volume and the acoustic space.
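
              As an illustration of that cleaning step, here is a minimal Python sketch applying a very short fade-in against clicks at a track's start and a longer fade-out to smooth a botched ending; the durations are placeholder assumptions, chosen by ear in practice.

```python
# Tidy a track's start and end with short fades - illustrative durations.
import numpy as np

def clean_ends(x, fs, fade_in_ms=10.0, fade_out_ms=500.0):
    y = np.asarray(x, dtype=float).copy()
    n_in = int(fs * fade_in_ms / 1000.0)      # samples in the fade-in
    n_out = int(fs * fade_out_ms / 1000.0)    # samples in the fade-out
    y[:n_in] *= np.linspace(0.0, 1.0, n_in)   # ramp up from silence
    y[-n_out:] *= np.linspace(1.0, 0.0, n_out)  # ramp down to silence
    return y
```
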
              The arguments - An outside perspective

                As in any collective artistic enterprise, the recording of a musical project represents an emotional investment that is often exhausting. Some musicians, for instance, have a conception of the sound they wish to give their instrument that can be quite… emphatic. If it prevails, this conception quickly creates problems of acoustic interaction with the next elements to be recorded. Negotiations inevitably follow, extra-musical considerations take over, and errors of judgment accumulate. By this stage the sound engineer is saturated by the intensity of the experience, reinforced by dozens of repeated listenings of each piece. A second engineer, totally foreign to the project, is then brought in for mixing, specifically to take advantage of his fresh perspective on the whole. Moving to another studio is also a judicious choice: different tools, different acoustics, different possibilities.

                But mixing is an equally difficult and equally random operation. Whoever has had a chance to listen to a large sample of non-mastered mixes knows the amazing variety of bizarre and incongruous sounds that can be encountered. Why, or better, in what sense are these sounds bizarre and incongruous? Distorted by one or several successive faulty monitoring systems, and squeezed through acoustic habits that are real vicious circles, they end up very far from the original artistic project as first conceived by its creators. In fact, they correspond to the will of no one: they are a non-human product, consumable "as is" only by connoisseurs of artistic vacuity, who once had to make do with an arsenal of expressive incompetence limited to out-of-tune instruments, skipped notes, toneless voices, erratic rhythms, non-existent arrangements and infantile harmonizations.

                But now, in the great tradition of performances pretending-to-be-voluntarily-deficient, what these enterprising minds had yet to discover was that a total denaturing of the sound could also mask the absence of artistic talent, while passing itself off as something complex and thought-provoking. And where tasteless customers cynically look for musical products reflecting their identity, we will also find second-rate figures trying to build original careers by feeding them. These niches really have no interest in resorting to mastering…

                However, as far as projects with artistic content worth reproducing faithfully are concerned, mastering remains within the logic of this outside perspective. Taking over from the sound engineer, whose resources are exhausted, the mixing engineer looks for the combination of settings that will best communicate the energy proper to the recorded tracks. Next in turn, with "fresh ears", the mastering engineer immediately perceives the acoustic nuisances that prevent a general equilibrium, and his job is to clean these imperfections from the final product. When we add to this the specificity of the tasks to accomplish, and the specialization of the tools and skills necessary to accomplish them, it becomes clear why mastering is considered as critical a step as recording, mixing and manufacturing.
                Incorrect claims

                  Some rare mastering engineers have described their activity as a link between professional listening environments and the average listening conditions experienced by consumers. This ambiguous and demagogic way of describing mastering can lead to an important misunderstanding. While it is possible, as explained in the first section, to anticipate what effect the ABCD multiband compressor of the EFGH-FM radio station will have on a particular mix, predicting how that mix will "sound" in Mr. and Mrs. Smith's living room is a different story. In fact, "home" sound systems have in common only a number of weaknesses relative to professional installations: placed for better or worse in noisy and acoustically intrusive rooms, they all present a response deficiency in the extreme high and low frequencies, a lack of headroom and a slow transient response. Their worst deficiency, however, is the accentuated coloration of their frequency response curve, and there, no typical profile can be established! There are as many variations in the quantity, shape and distribution of these dips and bumps as there are brands and models of players, amplifiers and loudspeakers, not to mention all the possible combinations of these elements! The frequency response curves of five loudspeakers, reproduced below, clearly account for this variety:

                  [Figure: measured frequency response curves of five high-end loudspeakers, each showing a different pattern of dips and bumps]

                  The curves above call for a few remarks:

                  - it proved impossible, for some unknown reason, to find any performance data on low- to average-quality loudspeakers. The erratic curves shown therefore all refer to high-end loudspeakers financially inaccessible to most consumers;

                  - the two top curves refer to the same product: the left one shows data collected in a laboratory, the right one data published by the manufacturer…

                  - even restricting the reading of the results to an "obsolete" range of 50 Hz to 10 kHz, it appears impossible to find a maximal deviation of less than 10 dB; needless to say, the specifications in the brochures were considerably more euphoric…

                  As we can see, there is no correspondence, no common point between the frequency fluctuations of these loudspeakers, and therefore no palliative measure can be designed that would apply to all of them. A mastering engineer who equalized to compensate for the curve of one particular model would simultaneously aggravate the problems of another model, or even of the same product placed in a different environment, as shown by the two curves for loudspeaker D.

                  "Alternative" listening practices

                    This demonstration allows us to reject other strongly held beliefs concerning listening systems and practices that supposedly enable one to bypass the mastering process:

                    - the shit box: while these remain largely unused, for good reason, in mastering studios, they are still part of the traditional tools of recording studios. The Yamaha NS-10M, ProAc and Auratone, for instance, are all supposed to present frequency response curves "representative" of consumer-level loudspeakers; this is, as we have seen, pure superstition. One may wonder what relevant information engineers can extract from the traditional back-and-forth switching between the main loudspeakers and the shit boxes… Their only usefulness would be to reassure (while tricking them in a shameful way) inexperienced producers and musicians disoriented by their first exposure to professional monitoring;

                    - the living rooms grand tour method is a particularly laborious variation on the same idea. It consists in repeatedly listening to a mix in a series of "real" environments, generally the living rooms and cars of friends and acquaintances. A real nightmare, and totally useless: auditory memory, whose short-term limits are well known, cannot draw any synthetic conclusion from this exposure to a series of listening situations, each erroneous in its own way, but all equally depressing;

                    - commonly practiced in electroacoustics, as we will see in the following section, the auto-mastering technique consists in attempting to master a piece in the same environment in which it was produced. There is no reason whatsoever to believe that this technique will do anything but exacerbate the situation: the producer, totally deprived of perspective, simply compensates a third time for the same listening errors made during recording and mixing;

                    - another favourite of independent production, friend-mastering, accomplished in a different though equivalent studio by the composer-producer or by a peer, exposes a mix realized on one coloured system to another non-professional set of treatments, controlled by a differently coloured monitoring. Its advantages over auto-mastering are only a matter of inclination: does one prefer to reinforce existing problems, or to create additional ones elsewhere in the frequency spectrum? Your mileage may vary…

                    - the use of near-field monitors minimizes, to a certain extent, the intrusive interaction of the room; in a space that has not been acoustically treated, however, that interaction remains too strong to allow mastering work free of compensation errors. And even assuming that the monitoring system chosen is in itself reliable (!), the crucial problem of low frequencies is not necessarily solved: one must then resort to a subwoofer, which brings us back to the problem of room acoustics…

                    - headphones, even high-quality ones, come with their own problems. Since they operate on a very different acoustic principle than loudspeakers, they cannot guarantee a reliable transfer, especially in terms of stereo space. They do not solve the problem of ultra-low frequencies either, and their typical level of use, 110 dB, can be harmful to the ear: anyone exposed to such a level for more than half an hour a day risks permanent hearing loss (see the calculation below).
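
                    That last claim can be checked against a published criterion. Using the NIOSH recommended exposure limit (85 dB(A) for 8 hours, with a 3 dB exchange rate), the permissible daily exposure at 110 dB comes to roughly a minute and a half, far below half an hour:

```python
# Permissible daily exposure under a NIOSH-style criterion:
# 85 dB(A) for 8 hours, halved for every 3 dB above that level.
def safe_exposure_minutes(level_db, criterion_db=85.0,
                          criterion_hours=8.0, exchange_db=3.0):
    return criterion_hours * 60.0 / 2.0 ** ((level_db - criterion_db)
                                            / exchange_db)

print(round(safe_exposure_minutes(110.0), 1))   # ~1.5 minutes at 110 dB
```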

                    We have described here approaches that try to produce, outside of the mastering environment, a reliable emulation of consumer-level systems. Two more professional attempts remain to be presented, this time carried out in mastering studios and even in conventional recording studios:

                    - special curves: instead of having to "verify" each mix in a consumer situation, it seemed more practical to try to reproduce an average environment using an equalization curve applied directly to the main monitors. A painful trial-and-error process aimed at designing such a curve has brought only debatable improvements in a very limited number of cases, and clear drawbacks in every other situation. This represents another stinging defeat, owing of course to the basic principle, but also to the disadvantages of equalized monitors: phase problems, unstable overall response, slowed transient response, etc.

                    - a studio design using diffusion (1), which claims to fulfill three objectives: considerably extending the sweet spot, using wall reflections to naturally correct weaknesses of the loudspeakers, and coming closer to the level of reverberation prevailing in the average listening room. This ambitious acoustic conception turns out to be extremely complex to realize. Only the first objective is clearly fulfilled: wall reflections rarely consent to exhibit the proper characteristics, and the wide variety of reverberation conditions resists any schematization.

                    ------------------------
                    (1) Contrary to focalization, the common practice of pointing the loudspeakers towards a central point to minimize acoustic intervention from the room, diffusion seeks to exploit reflections by placing the loudspeakers parallel to the side walls.


                    Conclusion: possibilities and limits

                      The sum of all these experiences and failures leads to the contemporary vision of mastering, which entirely excludes both the idea of reproducing an average of consumer-level systems and any supposed shortcut to a reference monitoring model:

                      The only effective mastering is done on flat listening (1), which is itself possible only in a correctly designed room, acoustically treated and perfectly insulated. A very high-quality amplification system must be used, delivering at least 1000 watts per channel and connected to true reference loudspeakers. An acoustic response analysis system, handled by a professional acoustician, must then be used to visualize the final performance and make the appropriate corrections.
                      ------------------------
                      (1) Under current conditions, "flat" still implies a residual deviation of 2-3 dB in the frequency response at the listening position. This remains a possible source of optimization errors.


                      The primary optimization work, aimed at the consumer-level experience, must be based exclusively on information collected through this uncompromised listening, free of any mental compensation and any preventive modification. Secondary versions, intended for other distribution channels, are then derived from this initial version, with restrictions whose pertinence can only be guaranteed by long experience.

                      - "But then", the clever reader may ask, convinced that he will thus confine us in a complex trap (and this pretension is betrayed by an eyebrow raised in an expression of false jollity) "what is the use of a flat listening optimization, since no consumer will ever experience it?" He then adds, final proof of ingenuity (though we already knew it, it was so obvious yesterday, at the Blajhpumpkin-pish-pish ceremony):

                      - "Each system imposes its own errors, cancelling most of the corrections made to the product!".

                      The following example sequence, which can be transposed to a large variety of problems, provides an answer (a small numeric sketch follows the sequence):

                      1. the monitoring system on which product XYZ was mixed presents a dip at 200 Hz. The mixing engineer systematically accentuated this frequency on every track with content in this region;

                      2. non-mastered, this mix sounds muddy and boomy on every system, except those with the exact same deficiency as the one used by the mixing engineer – these will sound just fine – and those that already have a bump at 200 Hz – these will sound simply horrible;

                      3. the mastering engineer hears a surplus around 200 Hz: he compensates by removing a few decibels at this frequency;

                      4. once mastered, the product is pleasant on every system except those with a problem at 200 Hz. But that problem already affects every product the owners of these systems listen to: they will therefore not blame this particular product. There is a second, ironic exception: on the system that was used for mixing, the mastered version will sound less pleasant… and the mixing engineer, if he is inexperienced, may deduce from this that the mastering engineer is incompetent!
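
                      To first order, the perceived level near 200 Hz is simply the content baked into the mix plus the playback system's response at that frequency, both in dB. The sketch below runs the four steps above through that arithmetic; the ±6 dB figures are placeholder assumptions.

```python
# The 200 Hz example in dB arithmetic (illustrative +/- 6 dB figures).
mix_boost = +6.0    # step 1: compensating a -6 dB dip in the mix room
master_cut = -6.0   # step 3: mastering removes the surplus it hears

systems = {"flat reference": 0.0,
           "same dip as the mix room": -6.0,
           "bump at 200 Hz": +6.0}

for name, response in systems.items():
    unmastered = mix_boost + response               # steps 1-2
    mastered = mix_boost + master_cut + response    # steps 3-4
    print(f"{name:>25}: unmastered {unmastered:+5.1f} dB,"
          f" mastered {mastered:+5.1f} dB")
```

                      The mastered column is flat except on the already-flawed systems, and the mix-room system is the one place where the mastered version measures worse: the ironic exception of step 4.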

                      From the preceding, we can easily derive a list of optimization laws, summarizing its possibilities and limits:

                      - the benefits of optimization are fully exportable only to other reference systems;
                      - it nevertheless improves, to varying degrees, the overall experience in the large majority of listening situations;
                      - the more transparent the listening, the more effective the optimization;
                      - there will always be systems – or at the very least frequency regions – in which its action will be nil or even negative.
                    About the author: Dominique Bassal
                    Electroacoustic composer / mastering studio / sound design