Audio Recording, Electronic Methods
The mechanical method of recording sound was invented by Thomas A. Edison in 1877. Thanks to the mass production of recorded copies for entertainment, cylinder phonographs, and later disk phonographs, grew into a major industry during the first quarter of the twentieth century. Electronic amplification, however, was not yet available: modern research has uncovered a few examples of pneumatic amplification, and there were also early experiments with electronic approaches.
The first published record known to have been made with the help of an electronic amplifier was Guest and Merriman's recording of the Burial Service of the Unknown Warrior in Westminster Abbey on November 11, 1920 (issued specifically for the Abbey Restoration Fund rather than as a commercial release).
Experimenters needed several components: a microphone; an electronic amplifier; a loudspeaker system for monitoring what was being recorded and avoiding overload; and an electromechanical device that would faithfully convert the amplified signal into mechanical vibrations for cutting a groove in wax with negligible added background noise. The cutting of records and the means for pressing sturdy disk records were commercial trade secrets at the time.
The vital breakthroughs occurred at the American Telephone and Telegraph Company (AT&T) after World War I, when its research section, Bell Laboratories, began studies aimed at improving transcontinental telephone communication. E.C. Wente developed a microphone in 1916, and in 1924 Henry C. Harrison developed an elaborate theory of "matched impedance" for sending speech over long distances without losses. He realized that the same principles could be applied to the faithful recording and reproduction of mechanically recorded sound. His team translated these electrical studies into mechanical and acoustical designs using "analogies": an electrical capacitance, for example, might be regarded as analogous to a mechanical spring, or to the springiness of air in a confined space. Based on these analogies, the team designed a complete system, including microphone, amplifier, loudspeaker, and cutter, together with an improved mechanical reproducer for judging the results.
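The analogy at the heart of this design method can be illustrated numerically: a moving mass behaves like an electrical inductance and a spring's compliance like a capacitance, so a mechanical resonator and its electrical analog obey the same resonance formula. The component values below are hypothetical, chosen only to make the arithmetic concrete; they are not drawn from the Western Electric designs.

```python
import math

# Electrical-mechanical "analogy" in the matched-impedance style:
# mass m  <->  inductance L,   compliance c  <->  capacitance C,
# so both systems resonate at f = 1 / (2*pi*sqrt(m*c)).
m = 0.002    # moving mass in kilograms (hypothetical value)
c = 1.3e-5   # mechanical compliance in meters per newton (hypothetical value)

f_resonance = 1.0 / (2.0 * math.pi * math.sqrt(m * c))  # resonance in hertz
print(f"resonant frequency = {f_resonance:.0f} Hz")
```

Working on paper with such analogs let the team place and damp mechanical resonances by the same circuit techniques already proven for long-distance telephone lines.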
The ideas were marketed by AT&T’s manufacturing and licensing company Western Electric. As far as can be established, the Western Electric recording system was first used commercially in New York on February 25, 1925. A few earlier examples are now known, and some have been published.
The Western Electric amplifier was originally developed for public-address purposes, and it used electronic valves, or vacuum tubes. Several other recording systems with electronic amplification appeared almost immediately, showing that parallel research had been under way; even a relatively small amount of electronic amplification can often compensate for classical thermodynamic inefficiencies. Among the new systems were the inventions of P.G.A.H. Voigt for the British Edison Bell Company and Captain H. Round for the British Marconi Company, both of whom developed alternative microphones.
A microphone converts sound into alternating electricity and should introduce little distortion over the frequencies it picks up. Ideally it has the same frequency response in all directions, and its response to acoustic waveforms should be linear. Even so, by the end of the twentieth century electronic amplification had still not made it possible to record the faintest sounds detectable by a healthy human ear: random noise was always added, arising from acoustic causes, electronic causes, and inefficiencies in acoustical-mechanical transducers. Microphone users therefore had to select an instrument suited to the proposed application.
Although the earliest optical film sound experiments probably did not use electronic amplification, both Western Electric and the merged RCA-Victor Company developed independent methods of recording sound on optical film with the help of electronics. Here the two methods had to be "compatible," so that any film could be shown in any theater, and this remained true with only minor exceptions until the 1980s.
Film studios provided two new electronic techniques widely understood today but much less so before the 1960s. "Automatic volume limiting" protected the fragile "light valves," with the side effect of improving speech intelligibility in cinema reproduction. The difficulties of recording foreign languages led to what is now called "multitrack recording," in which two or more sounds recorded at different times could be kept in synchronism and modified as necessary. Hollywood evolved the principle of three synchronous soundtracks, for music, sound effects, and dialog, to facilitate the adaptation of films for foreign markets.
The second technique was "negative feedback." This principle allowed a high degree of amplification to be traded for other benefits by feeding some of the output, reversed in phase, back to an earlier stage, thereby reducing nonlinearity and deviations in frequency response. Mechanical transducers could be built with "motional feedback" to the same effect. In sound recording, motional feedback was first used in disk-cutting equipment for the National Association of Broadcasters in America.
This was a different system from the one used in commercial recording, and because of the outbreak of World War II, commercial implementation of the principle was delayed until 1949. The cutting of stereophonic disks, however, would have been impossible without motional feedback.
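The trade described above can be sketched with the standard closed-loop gain formula for a feedback amplifier. The gain and feedback-fraction values here are hypothetical, chosen only to show how surplus amplification is exchanged for reduced distortion.

```python
# Negative feedback: a fraction beta of the output is returned, phase-reversed,
# to the input, so the closed-loop gain becomes A / (1 + A*beta).
A = 1000.0   # open-loop gain of the amplifier (hypothetical)
beta = 0.1   # fraction of the output fed back (hypothetical)

closed_loop_gain = A / (1 + A * beta)   # roughly 1/beta when A*beta is large
improvement = 1 + A * beta              # nonlinearity and frequency-response
                                        # deviations shrink by about this factor
print(f"gain drops from {A:.0f} to {closed_loop_gain:.2f}; "
      f"distortion reduced by a factor of about {improvement:.0f}")
```

In this sketch a hundredfold surplus of gain buys a roughly hundredfold reduction in distortion, which is the exchange that made precision feedback cutters practical.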
Classical analog information theory stated that frequency range could be traded against the need for amplification. At the start of the 1930s, both films and mechanical disk records were restricted to an upper frequency range of about 5 kHz. This range was gradually extended by trading amplification for electromechanical efficiency. It is generally accepted that the full audio frequency range was first achieved by the English Decca Company during 1944 (as a spinoff from war research). An upper limit of 14 kHz was obtained for microphones, amplifiers, and disk pickups.
During World War II, German engineers rediscovered a patented American invention for reducing the harmonic distortion that hysteresis causes on magnetic media. The principle of ultrasonic alternating-current bias greatly improved the linearity and reduced the background noise of magnetic tape, prompting much debate about precisely how the principle works; it has still not received a complete explanation. In analog sound recording, a useful element of feedback occurs with this process: if a small particle of dirt gets between the tape and the recording head, the bass goes down and the treble rises.
When the electronics are correctly set up, these effects cancel each other; and as a result, analog magnetic tape became the preferred mastering format for professionals. This format required even more amplification; but by the mid-1950s, magnetic tape was much simpler and cheaper than mechanical or optical recording, so it remained the favorite technology for analog recording until the end of the century. Digital audio recording using magnetic tape and optical methods began to overtake it after about 1980, but no fully digital microphone was developed.