Alloy Steel. Other Alloys

The development of alloy steel has its origins in the crucible process, perfected by Benjamin Huntsman in England around 1740. By melting bar iron and carbon in clay pots and then pouring ingots, Huntsman created superior steel with carbon uniformly dispersed throughout the metal. Used for cutlery, die stamps, and metal-cutting tools, crucible steel was the first specialty steel.

In 1868, Robert F. Mushet, the son of a Scottish ironmaster, found that adding finely powdered tungsten to crucible steel while it was molten made for much harder steel. Suitable for metal-cutting tools that could operate at high speed, Mushet tungsten tool steel was the first commercial alloy steel. The English metallurgist and steelmaker Sir Robert Hadfield is generally considered the founder of modern alloy steel practice, with his invention of manganese steel in 1882. This steel, containing 12 percent manganese, has the property of becoming harder as it is worked, which made it ideal for certain types of machinery, such as digging equipment.

Hadfield also invented silicon steel, which has electrical properties that make it useful for building transformers. His work showed conclusively that the controlled addition of alloying elements to steel could lead to significant new specialty products. Hadfield’s discoveries, which were well publicized, led many other engineers and steelmakers to experiment with alloying elements, and the period between about 1890 and 1930 was a very active one for the development of new alloys.

The first highly systematic investigation of alloy steels was carried out by Frederick W. Taylor and Maunsel White at the Bethlehem Steel Works in the 1890s. In addition to testing various alloy compositions, the two men also compared the impact of different types of heat treatment.

The experiments they conducted led to the development of high-speed steel, an alloy steel in which tungsten and chromium are the major alloying elements, along with molybdenum, vanadium, and cobalt in varying amounts. These steels allowed the development of metal-cutting tools that could operate at speeds three times faster than previous tools. The primary application of high-speed steel during the twentieth century was the manufacture of drill bits.

Military applications were also a major factor in the development of alloy steels. The demand for better armor plate, stronger gun barrels, and harder shells capable of penetrating armor led to the establishment of research laboratories at many leading steel firms. This played a significant role in the development of the science of metallurgy, with major firms like Vickers in the U.K. and Krupp in Germany funding metallurgical research.

The most notable discovery that came out of this work was the use of nickel as an alloying element. Nickel in quantities between 0.5 and 5.0 percent increases the toughness of steel, especially when alloyed with chromium and molybdenum. Nickel also slows the hardening process and so allows larger sections to be heat-treated successfully.

The young science of metallurgy gradually began to play a greater role in nonmilitary fields, most notably in automotive engineering. Vanadium steel, independently discovered by the metallurgists Kent Smith and John Oliver Arnold of the U.K. and Leon Guillet of France just after the beginning of the twentieth century, allowed the construction of lighter car frames. Research showed that the addition of as little as 0.2 percent vanadium considerably increased the steel’s resistance to dynamic stress, crucial for car components subject to the shocks caused by bad roads.

By 1905, British and French automobile manufacturers were using vanadium steel in their products. More significantly, Henry Ford learned of the properties of vanadium from Kent Smith and used vanadium alloy steel in the construction of the Model T. Vanadium steel was cheaper than other steels with equivalent properties, and could be easily heat-treated and machined. As a result, roughly 50 percent of all the steel used in the original Model T was vanadium alloy.

As the price of vanadium increased after World War I, Ford and other automobile manufacturers replaced it with other alloys, but vanadium had established the precedent of using alloy steel. By 1923, for example, the automobile industry consumed over 90 percent of the alloy steel output of the U.S., and the average passenger car used some 320 kilograms of alloy steel.

The extensive use of alloy steels by the automobile industry led to the establishment of standards for steel composition. First developed by the Society of Automotive Engineers (SAE) in 1911 and refined over the following decade, these standards for the description of steel were widely adopted and used industry-wide by the 1920s, and continued to be used for the rest of the century.

The system imposed a numerical code in which the initial digits described the alloy composition of the steel and the final digits the carbon content in hundredths of a percent. The specifications also described the physical properties that could be expected from the steel, and so made the specification and use of alloy steels much easier for steel consumers.
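The way such a designation is read can be sketched in a few lines of code. This is an illustrative decoder, not an official parser: the alloy-family table below is a small assumed subset, and it handles only the common four-digit form in which the last two digits give carbon in hundredths of a percent.

```python
# Illustrative decoder for 4-digit SAE steel numbers (assumed subset).
# A few representative alloy-family prefixes (partial list, for example only):
ALLOY_FAMILIES = {
    "10": "plain carbon steel",
    "41": "chromium-molybdenum (chrome-moly) steel",
    "43": "nickel-chromium-molybdenum steel",
    "51": "chromium steel",
}

def describe_sae(code: str) -> str:
    """Decode a 4-digit SAE steel number into a readable description."""
    family = ALLOY_FAMILIES.get(code[:2], "unknown alloy family")
    carbon = int(code[2:]) / 100  # last two digits = %C in hundredths
    return f"SAE {code}: {family}, nominally {carbon:.2f}% carbon"

print(describe_sae("4130"))
# SAE 4130: chromium-molybdenum (chrome-moly) steel, nominally 0.30% carbon
```

Reading SAE 4130 this way — a chromium-molybdenum steel with about 0.30 percent carbon — shows why the scheme made ordering and substituting steels so much simpler for consumers.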

One of the goals of automotive engineers in the 1910s and 1920s was the development of so-called ‘‘universal’’ alloy steel, by which they meant a steel that would have broad applications for engineering purposes. While no one alloy steel could serve all needs, the search for a universal steel led to the widespread adoption of steel alloyed with chromium and molybdenum, or ‘‘chrome-moly’’ steel. This alloy combines high strength and toughness with relative ease of machining and stamping, making it the default choice for many applications.

The final major class of alloy steel to be discovered was stainless steel. Its invention has been claimed by some ten different candidates in both Europe and the U.S. in the years around 1910. These various individuals all found that high levels of chromium (12 percent or more) gave exceptional corrosion resistance.

The term ‘‘stainless’’ is a bit of an exaggeration: stainless steel alloys will corrode under extreme conditions, though at a far slower rate than other steels. It is this resistance to corrosion, combined with strength and toughness, that made stainless steels so commercially important in the twentieth century. The first commercial stainless steels were being sold by 1914 for use in cutlery and turbine blades, and by the 1920s the material was commonly used in the chemical industry for reactor vessels and piping.

Stainless steel later found widespread application in the food processing industry, particularly in dairy processing and beer making. By the end of the twentieth century, stainless steel was the most widely produced alloy steel.

After the 1920s, the development of alloy steels was largely a matter of refinement rather than of significant new discoveries. Systematic experimentation led to changes in the mix of various alloys and the substitution of one alloy for another over time. The most significant factor has been the cost and availability of alloying elements, some of which are available in limited quantities from only a few locations.

For example, wartime shortages of particular elements put pressure on researchers to develop alternatives. During World War II, metallurgists found that the addition of very small amounts of boron (as little as 0.0005 percent) allowed the reduction of other alloying elements by as much as half in a variety of low- and medium-carbon steels. This started a trend that continued after the war of attempts to minimize the use of alloying elements for cost reasons and to more exactly regulate heat treatment to produce more consistent results.

The manufacture of alloy steels changed significantly over the period 1900-1925. The widespread introduction of electric steelmaking replaced the use of crucible furnaces for alloy steel processing. Electric furnaces increased the scale of alloy steel manufacture and allowed the easy addition of alloying elements during the melt.

As a result, steel produced electrically had a uniform composition and could be easily tailored to specific requirements. In particular, electric steel-making made the mass production of stainless steel possible, and the material became cheap enough in the interwar period that it could be used for large-scale applications like the production of railway cars and the cladding of the Chrysler and Empire State skyscrapers in New York.

A major refinement in steel manufacture, vacuum degassing, was introduced in the 1950s and became widespread by the 1970s. By subjecting molten steel to a strong vacuum, undesirable gases and volatile elements could be removed from the steel. This improved the quality of alloy steel, or alternatively allowed lower levels of alloy materials for the same physical properties.

As a result of manufacturing innovations, alloy steel gradually became cheaper and more widely used over the twentieth century. As early as the 1960s, the distinction between bulk and special steel became blurred, since bulk steels were being produced to more rigid standards and specialty steels were being produced in larger quantities. By the end of the twentieth century, nearly half of all steel production consisted of special steels.

Other Alloys. A variety of nonsteel alloy materials were developed during the twentieth century for particular engineering applications. The most commercially significant of these were nickel alloys and titanium alloys. Nickel alloys, particularly nickel-chromium alloys, are especially useful in high-temperature applications. Titanium alloys are light in weight and very strong, making them useful for aviation and space applications. The application of both materials was constrained largely by cost and, in the case of titanium, by processing difficulties.

Nickel-chromium alloy was significant in the development of the gas turbine engine in the 1930s. This alloy—roughly 80 percent nickel and 20 percent chromium—resists oxidation, maintains strength at high temperatures, and resists fatigue, particularly fatigue from embrittlement. It was later found that the addition of small amounts of aluminum and titanium added strength through precipitation hardening. The primary application of these alloys later in the twentieth century was in heating elements and exhaust components such as exhaust valves and diesel glow-plugs, as well as in turbine blades in gas turbines.

Pure titanium is about as strong as steel yet nearly 50 percent lighter. When alloyed, its strength is dramatically increased, making it particularly suitable for applications where weight is critical. Titanium was discovered by the Reverend William Gregor of Cornwall, U.K., in 1791. However, the pure elemental metal was not made until 1910 by New Zealand-born American metallurgist Matthew A. Hunter. The metal remained a laboratory curiosity until 1946, when William Justin Kroll of Luxembourg showed that titanium could be produced commercially by reducing titanium tetrachloride (TiCl4) with magnesium. Titanium metal production through the end of the twentieth century was based on this method.
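Titanium's weight advantage can be made concrete with a back-of-the-envelope specific-strength (strength divided by density) calculation. The strength and density figures below are representative handbook values assumed purely for illustration, not data from this article:

```python
# Rough comparison of specific strength, the figure of merit behind
# titanium's appeal where weight is critical. All values are assumed,
# representative textbook numbers.

def specific_strength(strength_mpa: float, density_kg_m3: float) -> float:
    """Specific strength in kN·m/kg: (MPa * 1e6 Pa/MPa) / (kg/m^3) / 1000."""
    return strength_mpa * 1e6 / density_kg_m3 / 1000

materials = {
    # name: (tensile strength in MPa, density in kg/m^3) - assumed values
    "structural steel": (500, 7850),
    "pure titanium":    (430, 4510),
    "Ti-6Al-4V":        (1000, 4430),
}

for name, (strength, density) in materials.items():
    print(f"{name:18s} {specific_strength(strength, density):6.1f} kN·m/kg")
```

Under these assumed numbers, pure titanium already beats steel per kilogram despite its slightly lower absolute strength, and the alloyed form roughly triples steel's specific strength — the ratio that mattered for airframes and engines.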

After World War II, U.S. Air Force studies concluded that titanium-based alloys were of potentially great importance. The emerging need for higher strength-to-weight ratios in jet aircraft structures and engines could not be satisfied efficiently by either steel or aluminum. As a result, the American government subsidized the development of the titanium industry. Once military needs were satisfied, the ready availability of the metal gave rise to opportunities in other industries, most notably chemical processing, medicine, and power generation.

Titanium’s strength-to-weight ratio and resistance to most forms of corrosion were the primary incentives for utilizing titanium in industry, replacing stainless steels, copper alloys, and other metals. The main alloy used in the aerospace industry was Titanium 6.4. It is composed of 90 percent titanium, 6 percent aluminum and 4 percent vanadium. Titanium 6.4 was developed in the 1950s and is known as aircraft-grade titanium.

Aircraft-grade titanium has a tensile strength of up to 1030 MPa and a Brinell hardness value of 330. But 6.4’s low ductility made it difficult to draw into tubing, so a leaner alloy called 3-2.5 (3 percent aluminum, 2.5 percent vanadium, 94.5 percent titanium) was created, which could be processed by special tube-making machinery. As a result, virtually all the titanium tubing in aircraft and aerospace consists of 3-2.5 alloy. Its use spread in the 1970s to sports products such as golf shafts, and in the 1980s to wheelchairs, ski poles, pool cues, bicycle frames, and tennis rackets.

Titanium is expensive, but not because it is rare. In fact, it is the fourth most abundant structural metallic element in the earth’s crust after aluminum, iron, and magnesium. High refining costs, high tooling costs, and the need to provide an oxygen-free atmosphere for heat-treating and annealing explain why titanium has historically been much more expensive than other structural metals.

As a result of these high costs, titanium has historically been used in applications where its low weight justified the extra expense. At the end of the twentieth century the aerospace industry continued to be the primary consumer of titanium alloys. For example, one Boeing 747 uses over 43,000 kg of titanium.
