Music and Technology: Development

Technology has always been inseparable from music. The moment man ceased to make music solely with his voice, technology entered the scene. Some social philosophers argue that technology and musical techniques, content and meaning, develop together dialectically. Although technology has always played some role in music, an increasing technologization took place during the twentieth century, and some genres of contemporary music seem to be completely dominated by technology.

Already in the early twentieth century, an acceleration of the role of technology in music can be observed—a new ‘‘machine music’’ came into existence, electronic musical instruments were developed, and music composers seemed to turn into sound researchers. Recording engineers acquired increasing importance and the rise of studio esthetics had a significant effect on the listeners’ expectations in concert halls.

Shortly before World War I the Italian futurists demanded the rejection of traditional musical principles and their replacement with free tonal expression. This led to the design of noise instruments, the intonarumori. With the futurists and others, ‘‘electricity, the liberator’’ became the slogan of the day. Instruments like Thaddeus Cahill’s ‘‘telharmonium’’ (1906) tried to imitate the sound of a symphony orchestra, insinuating that the orchestra players might soon be made redundant.

The player piano lifted musical performance out of concert halls and transferred it to private homes. In the early 1920s Leon Termen built his ‘‘theremin,’’ an electrical instrument with which the human hand seemed to conjure sound from the air. This process was based on obtaining audible frequency beats formed by the interference of inaudible high-frequency oscillations. Many inventors of that time tried to design an electronic organ.
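The heterodyne principle behind the theremin can be sketched in a few lines. The oscillator frequencies below are illustrative assumptions, not measurements from any actual instrument:

```python
# A minimal sketch of the theremin's heterodyne principle: two inaudible
# high-frequency oscillators are mixed, and only their difference tone
# falls in the audible range. Frequencies here are assumed for illustration.

def beat_frequency(f_fixed_hz: float, f_variable_hz: float) -> float:
    """Mixing two sinusoids yields sum and difference tones
    (sin a * sin b = 0.5 * [cos(a - b) - cos(a + b)]); after low-pass
    filtering, only the difference frequency remains audible."""
    return abs(f_variable_hz - f_fixed_hz)

# Moving the hand near the antenna slightly detunes the variable oscillator:
print(beat_frequency(170_000.0, 170_440.0))  # 440.0 (Hz), concert A
```

The player hears only the 440 Hz difference tone; the 170 kHz carriers themselves are far above the range of human hearing.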

In 1934 the American inventor Laurens Hammond developed the Hammond organ, an instrument with ninety-one small tone wheel generators with harmonic drawbars placed above the keyboard to permit the mixture of different tones. The instrument—easier to play than a conventional organ—proved immensely popular, as did the electric guitar, which in the 1950s and 1960s was to become the most important instrument of pop music.
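The Hammond organ’s tone wheels and drawbars amount to additive synthesis: each wheel supplies a near-sinusoidal partial, and each drawbar (0 to 8) scales how much of that partial enters the mix. A hedged sketch, with harmonic ratios and drawbar settings chosen purely for illustration rather than modeled on the real instrument’s circuitry:

```python
import numpy as np

fs = 44100                              # audio sample rate in Hz
t = np.arange(fs) / fs                  # one second of time samples

fundamental = 220.0                     # Hz (assumed base pitch)
harmonic_ratios = [0.5, 1, 2, 3, 4]     # sub-octave, fundamental, octave, ...
drawbars = [4, 8, 6, 0, 2]              # 0 = silent, 8 = full volume

# Mix the partials, each weighted by its drawbar level, then normalize.
tone = sum(
    (level / 8.0) * np.sin(2 * np.pi * fundamental * ratio * t)
    for ratio, level in zip(harmonic_ratios, drawbars)
)
tone /= np.max(np.abs(tone))            # scale to the range [-1, 1]
```

Changing the drawbar values reshapes the timbre without touching the pitch, which is exactly what made the instrument so flexible for players.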

The guitar’s amplification started in the 1930s in response to guitarists’ demands for their solos to be heard over the sound of big bands. It facilitated an expansion of traditional guitar solo techniques and allowed the implementation of new techniques resulting in new effects such as sustained tones.

Shortly after World War II sound recordings, disks and audio tapes were essential for the origins of ‘‘concrete music’’ (musique concrète), composed from altered and rearranged sounds from the environment. Many composers in the (analog) electronic studios were not satisfied with this, however. They aimed at producing new sounds by applying simple oscillators to generate electrical waveforms, which could then be translated into pure sound.

Already in the mid-1930s the Russian physicist Evgenij Sholpo had applied the principle of artificially synthesizing an optical phonogram to his ‘‘variophone.’’ In 1945 the U.S. inventor John Hanert with his ‘‘electrical orchestra’’ attempted to give the composer control over the complete fabric of musical composition. A tone was broken down into its characteristics: frequency, intensity, duration, and timbre. Hanert thus reduced music to its constituent elements and reassembled it into coherent musical structures. In the 1950s, the RCA engineers Harry Olson and Herbert Belar made great efforts to synthesize sound.

The apparatus they designed was, however, cumbersome and expensive. In 1966, Robert A. Moog started producing his Moog synthesizer using transistors and the technique of voltage control. He devised oscillators whose volume, pitch, or overtones could be altered by the amount of control voltage applied.
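Voltage control as Moog popularized it can be sketched with the widely adopted one-volt-per-octave convention (assumed here, along with the base pitch): each additional volt at the oscillator’s control input doubles the frequency.

```python
# A sketch of a voltage-controlled oscillator's pitch response, assuming
# the common one-volt-per-octave convention. Base pitch is an assumption.

def vco_frequency(control_voltage: float, base_freq_hz: float = 261.63) -> float:
    """Exponential pitch response: +1 V raises the pitch by one octave."""
    return base_freq_hz * 2 ** control_voltage

print(vco_frequency(0.0))  # 261.63 Hz (middle C, the assumed base pitch)
print(vco_frequency(1.0))  # 523.26 Hz, one octave higher
```

The exponential response is the key design choice: equal voltage steps produce equal musical intervals, which is what lets a keyboard, a sequencer, or another module drive the oscillator interchangeably.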

Although voltage-controlled synthesizers could produce a large variety of sounds and were immensely popular, their timbral capabilities remained limited. Computers and the ‘‘digital revolution’’ of the 1980s remedied this. Already in 1956 Lejaren Hiller and Leonard Isaacson at the University of Illinois had experimented with computer music and used calculated procedures to generate musical scores. A year later, Max Mathews at the Bell Telephone Laboratories in Murray Hill, New Jersey, produced the first computer-generated sounds.

The first experiments in digital synthesis were made in the mid-1970s; the ‘‘Synclavier,’’ invented in 1977, constructed every sound from scratch. The sampling techniques of the 1980s enabled musicians to treat all sound as data: once sampled, anything could be reproduced and reshaped. In 1983, the establishment of MIDI, the Musical Instrument Digital Interface, enabled musicians to easily transfer digital information between different electronic instruments as well as between instruments and computers.
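The digital information MIDI carries is compact: a channel message is a status byte followed by two 7-bit data bytes. A minimal sketch of a note-on message, following the byte layout of the MIDI 1.0 specification (the helper function itself is illustrative, not part of any library):

```python
# Build a MIDI 1.0 note-on message: status byte 0x90 plus the channel,
# then two 7-bit data bytes for note number and velocity.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

msg = note_on(0, 60, 100)   # middle C at moderate velocity on channel 1
print(msg.hex())            # "903c64"
```

Because every compliant instrument or computer interprets these three bytes the same way, a keyboard from one manufacturer can drive a synthesizer or sequencer from another, which is precisely the interoperability the 1983 standard established.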

Does all this mean that the introduction of electronics and computers into music making during the twentieth century has led to an ever-increasing musical perfection and to an opening up of new creative fields for professionals and amateurs? Some technological optimists are of that opinion and many ‘‘art music’’ composers, too, regard the computer as a useful tool for enhancing their creative abilities and relieving them from routine work. Others come to a negative conclusion: they argue that the computer stifles artistic creativity, produces a trend towards uniformity, devalues intellectual and artistic skills and has brought about technological dehumanization.

All these controversies aside, the revolution in music making as a consequence of electrification and electronics has taken place in only a few music genres. ‘‘Art music’’ making has proved remarkably resistant to electronics and computers; many composers of ‘‘minimal music’’ even feel disturbed by electronic sounds and prefer their music ‘‘unplugged.’’

Apart from music making, recording has been of great interest for the development of music in the twentieth century. Although many music enthusiasts were fascinated by it, conductors often declined to make recordings because they objected to their poor quality and resented the cold atmosphere in recording studios, inimical to artistic inspiration. With improved recording facilities, however, the situation changed.

In the 1960s Glenn Gould, the Canadian pianist, regarded the recording studio as the center of music making, relegating live performance to the fringe. Indeed, popular music from the 1960s onwards is unimaginable without the vast array of electronic studio equipment in existence; and other forms such as jazz would have developed differently without it.

This is because recordings captured improvisations, which are extremely difficult to write down; but recording has also influenced music making. Before the rise of sound recording, for example, most violinists used vibrato sparingly. Once sound recording had been introduced, vibrato adopted a compensatory role. It made it possible for violinists to overcome the limitations of early recording equipment, served to mask imperfect intonation and also helped project a greater sense of the artist’s presence.

Technology, particularly means of transportation like trains, cars and airplanes, has been an often recurring theme in both ‘‘art music’’ and ‘‘popular music.’’ Many composers of the early twentieth century wrote music to reflect a changing world. In the 1920s railways and particularly railway engines aroused the interest of many composers, as is documented in Arthur Honegger’s Pacific 231, named after one of the fastest American railway engines of its time. In Pacific 231 the composer successfully transformed features like speed, dynamics, and energy into the language of music.

At the turn of the twentieth century artists also perceived the recently invented airplane as an esthetic event with wide-ranging implications for artistic and moral sensibility. Even more than in railways, artists and musicians transformed the airplane into a spiritual creation. The age of the airplane was supposed to bring about unlimited individual mobility, peace, and harmony, but already before World War I it became clear that the airplane could also be used for destructive purposes.

The utopia of peaceful internationalism gave way to aggressive nationalism and the two world wars bear witness to the misuses of flight. All these different feelings are reflected in paintings and literature, but also in twentieth century musical compositions. The airplane in the American composer George Antheil’s Airplane Sonata (1921) manifests itself in machine-like driving rhythms and insistent ostinatos.

The German composers Kurt Weill and Paul Hindemith transformed Charles Lindbergh’s first transatlantic solo flight of 1927 into the radio cantata Der Lindberghflug (1929), and in 1945 the Czechoslovak composer Bohuslav Martinů wrote his Thunderbolt P-47, a spectacular piece of program music, as an homage to the victorious U.S. Air Force. At the same time the American composer Marc Blitzstein wrote his symphony The Airborne about the glory but also about the terror inherent in aviation. In the second half of the twentieth century space flight also became a theme in music, with generally positive connotations.

 





