Soundcrafting as Music Production and the Musical Producer
Every recording embodies interpretive choices, from the control and design of the stereo image, which gives a recording its sense of depth or space, to the intimacy, warmth, or muffled quality of an individual track. A musical group's recording is best served by a collaborator who shares the group's vision for how the recording should sound from the moment the project begins. The production process in school settings often requires students to play multiple roles and become what Tobias (2012) has called “hyphenated musician[s]” who must be able to think and act as songwriters, performers, sound engineers, recordists, mix engineers, and producers in ways that are recursive and often overlapping.
The key to this approach is the music teacher’s ability to establish an environment in which they act as a facilitator of student learning, instilling a creative identity in their music students (Randles, 2012). The teacher’s knowledge and understanding of the techniques and tools needed to produce quality recordings affect how well students can make recordings of their highest quality. Today’s music students live in a digital world surrounded by recorded music; if they wish to make a career out of doing what they love, they need to know how to use the technological tools available (Criswell & Menasché, 2009). Thus, recording is no longer an additional skill that might be included within a holistic music education, but a foundational one.
Music production is a holistic process (Hunter, Broad, & Jeanneret, 2018) through which students develop comprehensive musical skills: theoretical knowledge (skill-building), planning and decision-making (self-organization), communication and collaboration (running a session), and refinement of ideas and products (postproduction). What defines a music producer and the extent of their involvement varies from project to project, depending on the musical goals and objectives of the project and the capabilities of the students engaged in the process. To a lesser degree, it may also depend on the availability of, and access to, instruments and equipment.
The music producer has to make split-second decisions and guide the recording process toward a shared vision of the final song. This requires the ability to give verbal feedback and musical insights to the musicians in an informed manner that explains how the technology involved will shape their sound, often at moments when it is difficult for those being recorded to hear potential outcomes that far ahead in the process.
Building sound production skills may be best suited to older students in middle or high school who can oversee the technical, creative, and social processes simultaneously. That said, there are many opportunities for younger students to begin laying a foundation of soundcrafting skills that increase in complexity as they mature. For example, while early elementary students may be overly challenged by a task that requires them to place microphones appropriately to record various instruments, they can, with the teacher’s careful planning and assistance, be led through a process of learning how microphone placement affects recording quality. Early elementary students typically encounter microphones only as a novelty in a crowded auditorium, used to grab everyone’s attention, or as a source of piercing feedback. Allowing them the time and permission to play and problem-solve with audio equipment can be crucial for future musical engagement.
Production Process. Soundcrafting comes to life through a multi-stage process. Not every music production process is the same, but it is rarely completed all at once, so it is helpful to categorize the work involved. We present a six-step approach: (1) songwriting or composition; (2) arranging; (3) tracking; (4) editing; (5) mixing; and (6) mastering. As other chapters within this book focus intensively on songwriting and arranging, we will focus solely on tracking, editing, mixing, and mastering within recording production.
Tracking. Tracking is the process of recording the various instruments or voices used to perform a song. Tracking sessions can be divided into those that attempt to minimize the presence of the room(s) in which instrumentalists or vocalists perform, which we will call studio sessions, and those that capture the room’s response as the instrumentalists or vocalists perform, which we will call venue sessions.
Studio sessions can be recorded one track at a time or simultaneously. If recorded simultaneously, it is ideal to isolate the performers’ sounds from one another as much as possible, perhaps with each performer in a separate room or with some sort of partition between them. If the tracks are recorded sequentially, the isolation takes care of itself. In either case, headphones provide the monitoring that isolation requires: when laying down tracks sequentially, you listen through headphones to all previously recorded tracks while recording the new one; when recording simultaneously, all of the musicians listen to one another through headphones while each is recorded in an isolated environment. The resulting isolated tracks provide a great deal of flexibility in everything that follows.
For venue sessions, the instrumentalists’ and vocalists’ overall sound is primarily captured by microphones arranged in a stereo configuration. Stereophonic techniques will be described in more detail below; suffice it to say that two or more microphones are arranged in a fixed stereo pattern. Any other microphones used are designated as spot mics. Spot mics provide supplemental tracks that give limited post-processing options, such as bringing out an instrumental solo at an opportune moment, but they must never overwhelm the stereo configuration’s microphones or the stereo image will be destroyed.
To record tracks well, a soundcrafter needs to develop a strong understanding of best practices for recording each instrument they work with. This involves a theoretical or intuitive grasp of the physics of vibrating bodies and of the methods by which particular microphones capture vibrations, knowledge that informs the best placements of recording equipment and microphones for each instrument or ensemble. Admittedly, this topic is vast and it is easy to become discouraged; where should one start? Classic resources for developing recording knowledge include Rayburn (2012) and Roads & Strawn (1996). Keep in mind, though, that there is no single best placement for any instrument. The most important skills to develop are avoiding distortion, getting a hot signal, and correct stereophonic technique. The easiest stereo patterns to master are staples of the repertory: the XY pattern and the Blumlein pattern. Numerous quick tutorials on these patterns are available online.
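For readers who prefer to see the geometry in code, the following Python sketch (assuming NumPy; the angles and names are illustrative) models an XY pair as two coincident cardioid capsules angled ±45° and prints each channel’s relative level for sources at several angles, showing how inter-channel level differences encode direction:

```python
import numpy as np

def cardioid_gain(source_angle_deg, mic_axis_deg):
    """Standard cardioid polar pattern: gain = 0.5 * (1 + cos(angle
    between the source and the capsule's on-axis direction))."""
    off_axis = np.deg2rad(source_angle_deg - mic_axis_deg)
    return 0.5 * (1.0 + np.cos(off_axis))

# A coincident XY pair: two cardioid capsules angled +/-45 degrees.
for angle in (-90, -45, 0, 45, 90):
    left = cardioid_gain(angle, -45)
    right = cardioid_gain(angle, +45)
    print(f"source at {angle:+4d} deg:  L = {left:.2f}, R = {right:.2f}")
```

A source dead center reaches both capsules equally; a source 45° to the left hits the left capsule on-axis at full level while the right capsule picks it up at half level, and it is this level difference that the playback system renders as stereo placement.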
While STEM knowledge plays a role, students should also experiment, using their ears and weighing the results that a recording session with a particular method yields. In addition, the producer must know how to structure a recording session to benefit the musicians, maximize the use of studio time, and ensure the highest quality recordings are made. The role thus also requires strong social skills and the ability to speak to the recording musicians in musical terms.
Editing. Editing is the process through which recorded sounds are manipulated to improve overall sound quality. While modern tools make it ever easier to repair a performance after the fact, it is best to treat them as a fallback, not a go-to; the amount of editing a producer must do depends on the quality of the recordings captured during tracking. Learning how to edit recordings ensures that the producer’s and the musicians’ vision for the recording can be met without depending solely on the tools of tracking. Common editing tasks include removing breaths, coughs, a ringing phone, or any other unwanted interference; adding intros and outros; stretching or shortening audio and sound effects; splicing together tracks or elements of the music recorded separately or at different sittings; syncing up different musical instruments so that they all sound on the beat; pitch processing; and looping, slicing, and editing beats. No single method of editing music will work with every artist and every situation. Many factors shape the best decision in each case, but the process almost always involves the application of filters, dynamic compression, and gain, as sketched below. A variety of secondary techniques are sometimes included as well, such as noise gating, panning, and reverberation.
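As a minimal illustration of two of these staple operations, the following Python sketch (assuming NumPy; the threshold, ratio, and sample values are invented) applies a naive dynamic compressor followed by make-up gain to a short buffer of samples:

```python
import numpy as np

def apply_gain(x, gain_db):
    """Scale a signal by a gain expressed in decibels."""
    return x * 10.0 ** (gain_db / 20.0)

def compress(x, threshold_db=-20.0, ratio=4.0):
    """A naive sample-by-sample compressor: levels above the
    threshold are scaled down by the given ratio. Real compressors
    smooth this decision with attack and release envelopes."""
    threshold = 10.0 ** (threshold_db / 20.0)
    mag = np.abs(x)
    # Above the threshold, output level = threshold + excess / ratio.
    squeezed = threshold + (mag - threshold) / ratio
    return np.where(mag > threshold, np.sign(x) * squeezed, x)

# A loud spike in quiet material is tamed, then the whole
# signal is brought back up with 6 dB of make-up gain.
signal = np.array([0.01, 0.02, 0.9, 0.02, 0.01])
print(apply_gain(compress(signal), 6.0))
```

The loud sample at 0.9 comes out at roughly 0.6 while the quiet samples are merely doubled, shrinking the dynamic range, which is exactly what a compressor is for.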
Mixing. Mixing is the process of combining multiple tracks into a single sound file. Software devoted to mixing is often called sound montage software or a digital audio workstation (DAW). It is computationally simple to mix sound files, but it is mathematically difficult to decompose them back into their state before mixing. Thus we have a mixdown or bounce that combines the files, but no corresponding “un-mix” to take them apart. DAWs therefore hold a session containing information such as the temporal relationships among the component tracks and the processing applied to them in a so-called non-destructive fashion, meaning the underlying sound files are never altered; the session merely stores instructions for combining and processing them. Destructive editing, by contrast, yields a single sound file as its result (it has been mixed). Software that traffics in destructive editing tends to be called a sound file editor. There is significant overlap between montage/DAW programs and sound file editors, but DAWs tend toward computationally intensive non-destructive alterations, whereas sound file editors tend toward computationally light destructive changes. This means that montage/DAW programs offer greater flexibility, as the sketch below illustrates.
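The asymmetry between mixing and un-mixing is easy to see in code. In this minimal sketch (assuming NumPy; the track contents and gain values are invented for illustration), the bounce is a simple weighted sum that discards any record of its components:

```python
import numpy as np

# Three isolated tracks of equal length (invented placeholder data;
# in practice these would be loaded from recorded sound files).
drums  = np.random.uniform(-1, 1, 44100)
bass   = np.random.uniform(-1, 1, 44100)
vocals = np.random.uniform(-1, 1, 44100)

# A destructive mixdown ("bounce") is just a weighted sum of samples.
# A real bounce would also guard against clipping (|mix| > 1).
mix = 0.8 * drums + 0.6 * bass + 1.0 * vocals

# Nothing in `mix` records which sample came from which track, which
# is why a DAW session keeps the component files and the gain and
# processing instructions separate (non-destructive editing).
```

Recovering the drums, bass, and vocals from `mix` alone is the ill-posed source-separation problem, whereas the DAW session can recreate the bounce at any time simply by re-running the sum.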
Mixing can be broken down into three categories: general mixing, moderate mixing, and fine mixing. Each category continues the refinement process. General mixing includes determining which recorded take of a particular instrument is the best overall, then determining whether better performances of certain sections of the song can be mixed into the preferred take. For example, the second take may be the better recording overall, while the bridge section of take three was the best for that position in the song. General mixing requires deciding how the best portions of each recorded take should be edited together. As individual tracks are perfected, they need to be considered in relation to the other tracks being perfected; this analysis of how well the tracks fit together continues until the producer feels they have the best possible edited mix of all the recordings.
Moderate mixing is completed after general mixing, when the producer finds particular phrases, notes, or words that need a bit more attention. Fine mixing happens once you have a complete track that is largely working. The purpose of fine mixing is to select a portion of the song and listen to it at the “big picture” level: not to individual tracks but, instead, to the ways the tracks work or don’t work together. Moving through chunks of the song while fine mixing can help producers avoid the trap of becoming stuck in the minutiae of general and moderate mixing.
Mastering. Mastering is the process of taking an audio mix and preparing it for distribution or performance. It involves taking a collection of nearly complete mix sessions and optimizing the resulting destructive mixes both for listening together (e.g., as an album) and in relationship to other similar albums (e.g., what is the mean amplitude of my drum and bass album relative to Aphex Twin’s Hangable Auto Bulb?). Mastering requires listening to the final mix on various speakers, from fine studio monitors to a car stereo with blown-out woofers, and considering whether the mix still sounds good, because each set of speakers applies a distinctive coloration or filtration to the mix. Various methods are available: one can keep multiple sets of speakers at the computer where mixing takes place, carry the mix on some medium to be played in various locations, or attempt to simulate the filtration electronically. One concrete reference point when comparing albums is average level, as sketched below.
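The following Python sketch computes a mix’s RMS level in decibels relative to full scale; it assumes NumPy, uses invented buffers as stand-ins for bounced mixes, and professional mastering would reach for a perceptual loudness measure such as LUFS rather than raw RMS:

```python
import numpy as np

def rms_dbfs(x):
    """Root-mean-square level of a signal, in dB relative to
    full scale (0 dBFS = a constant full-scale signal)."""
    rms = np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms)

# Invented stand-ins; in practice these would be loaded from
# the bounced mix and from a commercial reference recording.
my_mix    = 0.3 * np.random.uniform(-1, 1, 44100)
reference = 0.5 * np.random.uniform(-1, 1, 44100)

print(f"my mix:    {rms_dbfs(my_mix):.1f} dBFS")
print(f"reference: {rms_dbfs(reference):.1f} dBFS")
```

A gap of several decibels between the two numbers is a signal that the master may need gain or compression before it will sit comfortably alongside the reference on a playlist.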
While the ideal situation is to have multiple sets of speakers available (with at least one accurate set), speaker coloration can also be simulated with software. An advantage of this curricular path is that filters are an excellent didactic topic; they are the building blocks of what is popularly known as equalization, or EQ. Filters are achieved by delaying copies of the signal. The mechanism may not be immediately obvious: vibrations in air involve areas where air molecules are bunched up, called compressions; areas in which air molecules are less numerous, called rarefactions; areas in which air molecules are evenly spaced, where they are said to be in equilibrium; and situations that mediate between these states.
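To make this concrete, the sketch below builds a crude speaker-coloration simulation from nothing but delayed copies of the signal: an eight-tap moving average, in which the delayed copies of high frequencies land out of phase and cancel while low frequencies pass through. This is a minimal illustration assuming NumPy; the tap count and test frequencies are arbitrary, and real EQ uses more carefully designed filters.

```python
import numpy as np

def moving_average_lowpass(x, taps=8):
    """A crude lowpass built purely from delayed copies: the output
    averages the current sample with the previous taps-1 samples.
    It can stand in for the muffling of a cheap or blown-out speaker."""
    y = np.copy(x)
    for d in range(1, taps):
        y[d:] += x[:-d]  # add a copy of the signal delayed by d samples
    return y / taps

sr = 44100                 # sample rate in Hz
t = np.arange(sr) / sr     # one second of time points

for freq in (100.0, 8000.0):
    tone = np.sin(2 * np.pi * freq * t)
    out = moving_average_lowpass(tone)
    print(f"{freq:7.1f} Hz -> peak level {np.max(np.abs(out[8:])):.3f}")
```

The 100 Hz tone emerges essentially untouched while the 8,000 Hz tone is attenuated to roughly a quarter of its level, purely because of how the delayed copies superpose.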
When sound waves, alternating between compression and rarefaction, meet, they are superposed, meaning that their states are combined or added together. Thus a period of maximal compression and a period of maximal rarefaction that meet create a state of equilibrium, a phenomenon called phase cancellation. Such cancellations and interferences arise in a predictable and quantifiable fashion; the name of this predictive model is the linear time-invariant system. Filters’ uses are at least as numerous as those of the lauded sinusoid tone. As computer-music master Julius Smith said: “When you think about it, everything is a filter” (Smith, 1985, 13).
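This superposition arithmetic can be verified numerically in a few lines. The sketch below (assuming NumPy; the 440 Hz tone is arbitrary) sums a sinusoid with a copy shifted by half a cycle and confirms that the result is silence up to floating-point error:

```python
import numpy as np

sr = 44100               # sample rate in Hz
t = np.arange(sr) / sr   # one second of time points
freq = 440.0

wave = np.sin(2 * np.pi * freq * t)              # compression leads
inverted = np.sin(2 * np.pi * freq * t + np.pi)  # half a cycle later

# Superposition: the states add. Maximal compression meets maximal
# rarefaction, leaving equilibrium (near-silence).
print(np.max(np.abs(wave + inverted)))  # ~1e-13: zero, up to rounding
```

Shifting the second tone by anything other than half a cycle yields partial cancellation or reinforcement, and it is precisely this frequency-dependent behavior of delayed copies that a linear time-invariant filter exploits.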