Tuesday 24 June 2014

Electronics

Electronics is the branch of physics and electrical engineering that deals with the emission, behaviour, and effects of electrons and with electronic devices.
Electronics encompasses an exceptionally broad range of technology. The term originally was applied to the study of electron behaviour and movement, particularly as observed in the first electron tubes. It came to be used in its broader sense with advances in knowledge about the fundamental nature of electrons and about the way in which the motion of these particles could be utilized. Today many scientific and technical disciplines deal with different aspects of electronics. Research in these fields has led to the development of such key devices as transistors, integrated circuits, lasers, and optical fibres. These in turn have made it possible to manufacture a wide array of electronic consumer, industrial, and military products. Indeed, it can be said that the world is in the midst of an electronic revolution at least as significant as the industrial revolution of the 19th century.

The history of electronics


The vacuum tube era

Theoretical and experimental studies of electricity during the 18th and 19th centuries led to the development of the first electrical machines and the beginning of the widespread use of electricity. The history of electronics began to evolve separately from that of electricity late in the 19th century with the identification of the electron by the English physicist Sir Joseph John Thomson and the measurement of its electric charge by the American physicist Robert A. Millikan in 1909.
At the time of Thomson’s work, the American inventor Thomas A. Edison had observed a bluish glow in some of his early lightbulbs under certain conditions and found that a current would flow from one electrode in the lamp to another if the second one (anode) were made positively charged with respect to the first (cathode). Work by Thomson and his students and by the English engineer John Ambrose Fleming revealed that this so-called Edison effect was the result of the emission of electrons from the cathode, the hot filament in the lamp. The motion of the electrons to the anode, a metal plate, constituted an electric current that would not exist if the anode were negatively charged.
This discovery provided impetus for the development of electron tubes, including an improved X-ray tube by the American engineer William D. Coolidge and Fleming’s thermionic valve (a two-electrode vacuum tube) for use in radio receivers. The detection of a radio signal, which is a very high-frequency alternating current (AC), requires that the signal be rectified; i.e., the alternating current must be converted into a direct current (DC) by a device that conducts only when the signal has one polarity but not when it has the other—precisely what Fleming’s valve (patented in 1904) did. Previously, radio signals were detected by various empirically developed devices such as the “cat’s whisker” detector, which was composed of a fine wire (the whisker) in delicate contact with the surface of a natural crystal of lead sulfide (galena) or some other semiconductor material. These devices were undependable, lacked sufficient sensitivity, and required constant adjustment of the whisker-to-crystal contact to produce the desired result. Yet these were the forerunners of today’s solid-state devices. The fact that crystal rectifiers worked at all encouraged scientists to continue studying them and gradually to obtain the fundamental understanding of the electrical properties of semiconducting materials necessary to permit the invention of the transistor.
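As a rough numerical illustration of what rectification does to a signal (a sketch of the principle, not a model of Fleming’s valve itself), the short Python snippet below treats the detector as an ideal diode that conducts only during the positive half of each cycle; the signal frequency and sample count are arbitrary illustrative choices.

    import numpy as np

    # A 1 kHz sine wave sampled for a few milliseconds, standing in
    # for a radio-frequency signal (values are illustrative only).
    t = np.linspace(0.0, 4e-3, 1000)        # 4 ms of time, in seconds
    signal = np.sin(2 * np.pi * 1e3 * t)    # alternating current (AC)

    # An ideal rectifier conducts only when the signal has one
    # polarity, blocking the negative half-cycles entirely.
    rectified = np.maximum(signal, 0.0)

    # The AC input averages to (nearly) zero; the rectified output
    # has a net direct-current (DC) component, which is what a
    # detector needs in order to recover the signal.
    print(f"mean of AC input:         {signal.mean():+.4f}")
    print(f"mean of rectified output: {rectified.mean():+.4f}")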
In 1906 Lee De Forest, an American engineer, developed a type of vacuum tube that was capable of amplifying radio signals. De Forest added a grid of fine wire between the cathode and anode of the two-electrode thermionic valve constructed by Fleming. The new device, which De Forest dubbed the Audion (patented in 1907), was thus a three-electrode vacuum tube. In operation, the anode in such a vacuum tube is given a positive potential (positively biased) with respect to the cathode, while the grid is negatively biased. A large negative bias on the grid prevents any electrons emitted from the cathode from reaching the anode; however, because the grid is largely open space, a less negative bias permits some electrons to pass through it and reach the anode. Small variations in the grid potential can thus control large amounts of anode current.
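This grid-controls-current behaviour can be sketched numerically. A common textbook idealization of the triode (not De Forest’s own analysis, which came later) is the three-halves-power law, in which anode current varies as (Vgrid + Vanode/μ)^3/2 and is zero at cutoff; the perveance K and amplification factor μ below are invented illustrative values, not the parameters of any real tube.

    # Idealized triode: the three-halves-power law. K (perveance)
    # and MU (amplification factor) are illustrative values only.
    K = 1.5e-3   # A / V**1.5
    MU = 20.0    # amplification factor

    def anode_current(v_grid, v_anode):
        """Anode current in amperes; zero when the tube is cut off."""
        drive = v_grid + v_anode / MU
        return K * drive**1.5 if drive > 0 else 0.0

    V_ANODE = 150.0  # volts, fixed positive bias on the anode

    # Sweep the grid over a few negative bias points: small changes
    # in grid voltage produce large changes in anode current.
    for v_grid in (-7.5, -5.0, -2.5, 0.0):
        i_a = anode_current(v_grid, V_ANODE)
        print(f"grid {v_grid:+5.1f} V -> anode current {i_a * 1e3:6.2f} mA")

At a grid bias of −7.5 V the tube is cut off; relaxing the bias to 0 V raises the anode current to tens of milliamperes, which is the amplifying action the text describes.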
The vacuum tube permitted the development of radio broadcasting, long-distance telephony, television, and the first electronic digital computers. These early electronic computers were, in fact, the largest vacuum-tube systems ever built. Perhaps the best-known representative is the ENIAC (Electronic Numerical Integrator and Computer), completed in 1946.
The special requirements of the many different applications of vacuum tubes led to numerous improvements, enabling them to handle large amounts of power, operate at very high frequencies, have greater than average reliability, or be made very compact (the size of a thimble). The cathode-ray tube, originally developed for displaying electrical waveforms on a screen for engineering measurements, evolved into the television picture tube. Such tubes operate by forming the electrons emitted from the cathode into a thin beam that impinges on a fluorescent screen at the end of the tube. The screen emits light that can be viewed from outside the tube. Deflecting the electron beam causes patterns of light to be produced on the screen, creating the desired optical images.
Other specialized types of vacuum tubes, developed or refined during World War II for military purposes, are still used today in microwave ovens and as extremely high-frequency transmitters aboard space satellites. Notwithstanding the remarkable success of solid-state devices in most electronic applications, there are certain specialized functions that only vacuum tubes can perform. These usually involve operation at extremes of power or frequency. Vacuum tubes continue to be used as display devices for television sets and computer monitors because other means of providing the function are more expensive, though even this situation is changing.
Vacuum tubes are fragile and ultimately wear out in service. Failure occurs in normal usage either from the effects of repeated heating and cooling as equipment is switched on and off (thermal fatigue), which ultimately causes a physical fracture in some part of the interior structure of the tube, or from degradation of the properties of the cathode by residual gases in the tube. Vacuum tubes also take time (from a few seconds to several minutes) to “warm up” to operating temperature—an inconvenience at best and in some cases a serious limitation to their use. These shortcomings motivated scientists at Bell Laboratories to seek an alternative to the vacuum tube and led to the development of the transistor.

The semiconductor revolution

INVENTION OF THE TRANSISTOR

The invention of the transistor in 1947 by John Bardeen, Walter H. Brattain, and William B. Shockley of the Bell research staff provided the first of a series of new devices with remarkable potential for expanding the utility of electronic equipment. Transistors, along with such subsequent developments as integrated circuits, are made of crystalline solid materials called semiconductors, which have electrical properties that can be varied over an extremely wide range by the addition of minuscule quantities of other elements. The electric current in semiconductors is carried by electrons, which have a negative charge, and also by “holes,” analogous entities that carry a positive charge. The availability of two kinds of charge carriers in semiconductors is a valuable property exploited in many electronic devices made of such materials.
Early transistors were produced using germanium as the semiconductor material, because methods of purifying it to the required degree had been developed during and shortly after World War II. Because the electrical properties of semiconductors are extremely sensitive to the slightest trace of certain other elements, only about one part per billion of such elements can be tolerated in material to be used for making semiconductor devices.
During the late 1950s, research on the purification of silicon succeeded in producing material suitable for semiconductor devices, and new devices made of silicon were manufactured from about 1960. Silicon quickly became the preferred raw material, because it is much more abundant than germanium and thus less expensive. In addition, silicon retains its semiconducting properties at higher temperatures than does germanium. Silicon diodes can be operated at temperatures up to 200 °C (392 °F), whereas germanium diodes cannot be operated above 85 °C (185 °F). There was one other important property of silicon, not appreciated at the time but crucial to the development of low-cost transistors and integrated circuits: silicon, unlike germanium, forms a tenaciously adhering oxide film with excellent electrical insulating properties when it is heated to high temperatures in the presence of oxygen. This film is utilized as a mask to permit the desired impurities that modify the electrical properties of silicon to be introduced into it during manufacture of semiconductor devices. The mask pattern, formed by a photolithographic process, permits the creation of tiny transistors and other electronic components in the silicon.

INTEGRATED CIRCUITS

By 1960 vacuum tubes were rapidly being supplanted by transistors, because the latter had become less expensive, did not burn out in service, and were much smaller and more reliable. Computers employed hundreds of thousands of transistors each. This fact, together with the need for compact, lightweight electronic missile-guidance systems, led to the invention of the integrated circuit (IC) independently by Jack Kilby of Texas Instruments Incorporated in 1958 and by Jean Hoerni and Robert Noyce of Fairchild Semiconductor Corporation in 1959. Kilby is usually credited with having developed the concept of integrating device and circuit elements onto a single silicon chip, while Noyce is given credit for having conceived the method for integrating the separate elements.
Early ICs contained about 10 individual components on a silicon chip 3 mm (0.12 inch) square. By 1970 the number was up to 1,000 on a chip of the same size at no increase in cost. Late in the following year the first microprocessor was introduced. The device contained all the arithmetic, logic, and control circuitry required to perform the functions of a computer’s central processing unit (CPU). This type of large-scale IC was developed by a team at Intel Corporation, the same company that also introduced the memory IC in 1971. The stage was now set for the computerization of small electronic equipment.
Until the microprocessor appeared on the scene, computers were essentially discrete pieces of equipment used primarily for data processing and scientific calculations. They ranged in size from minicomputers, comparable in dimensions to a small filing cabinet, to mainframe systems that could fill a large room. The microprocessor enabled computer engineers to develop microcomputers—systems about the size of a lunch box or smaller but with enough computing power to perform many kinds of business, industrial, and scientific tasks. Such systems made it possible to control a host of small instruments or devices (e.g., numerically controlled lathes and one-armed robotic devices for spot welding) by using standard components programmed to do a specific job. The very existence of computer hardware inside such devices is not apparent to the user.
The large demand for microprocessors generated by these initial applications led to high-volume production and a dramatic reduction in cost. This in turn promoted the use of the devices in many other applications—for example, in household appliances and automobiles, for which electronic controls had previously been too expensive to consider. Continued advances in IC technology gave rise to very large-scale integration (VLSI), which substantially increased the circuit density of microprocessors. These technological advances, coupled with further cost reductions stemming from improved manufacturing methods, made feasible the mass production of personal computers for use in offices, schools, and homes.
By the mid-1980s inexpensive microprocessors had stimulated computerization of an enormous variety of consumer products. Common examples included programmable microwave ovens and thermostats, clothes washers and dryers, self-tuning television sets and self-focusing cameras, videocassette recorders and video games, telephones and answering machines, musical instruments, watches, and security systems. Microelectronics also came to the fore in business, industry, government, and other sectors. Microprocessor-based equipment proliferated, ranging from automatic teller machines (ATMs) and point-of-sale terminals in retail stores to automated factory assembly systems and office workstations.
By mid-1986 memory ICs with a capacity of 262,144 bits (binary digits) were available. In fact, Gordon E. Moore, one of the founders of Intel, observed as early as 1965 that the complexity of ICs was approximately doubling every 18–24 months, which was still the case in 2000. This empirical “Moore’s law” is widely used in forecasting the technological requirements for manufacturing future ICs.
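Stated as a formula, the observation is N(t) = N₀ · 2^((t − t₀)/T), where T is the doubling period. The Python sketch below starts from the 262,144-bit figure cited above and assumes an 18-month doubling period; the projected values are for illustration of how such forecasts are made, not historical data.

    # Moore's-law projection: N(t) = N0 * 2 ** ((t - t0) / T),
    # where T is the doubling period in years (assumed, not measured).
    N0 = 262_144          # memory capacity in bits, mid-1986 (from the text)
    T0 = 1986.5           # reference date as a decimal year
    DOUBLING_YEARS = 1.5  # assumed 18-month doubling period

    def projected_bits(year):
        """Projected memory-IC capacity in bits at the given year."""
        return N0 * 2 ** ((year - T0) / DOUBLING_YEARS)

    for year in (1989.5, 1992.5, 1995.5):
        print(f"{year:.1f}: ~{projected_bits(year):,.0f} bits")

Under these assumptions capacity quadruples every three years, reaching roughly one megabit by 1989 and sixteen megabits by the mid-1990s, which is the kind of extrapolation manufacturers use for planning.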

COMPOUND SEMICONDUCTOR MATERIALS

Many semiconductor materials other than silicon and germanium exist, and they have different useful properties. Silicon carbide is a compound semiconductor, the only one composed of two elements from column IV of the periodic table. It is particularly suited for making devices for specialized high-temperature applications. Other compounds formed by combining elements from column III of the periodic table—such as aluminum, gallium, and indium—with elements from column V—such as phosphorus, arsenic, and antimony—are of particular interest. These so-called III-V compounds are used to make semiconductor devices that emit light efficiently or that operate at exceptionally high frequencies.
A remarkable characteristic of these compounds is that they can, in effect, be mixed together. One can produce gallium arsenide and then substitute aluminum for some of the gallium, or phosphorus for some of the arsenic. When this is done, the electrical and optical properties of the material change subtly and continuously, in proportion to the amount of aluminum or phosphorus used.
Except for silicon carbide, these compounds have the same crystal structure. This makes possible the gradation of composition, and thus the properties, of the semiconductor material within one continuous crystalline body. Modern material-processing techniques allow these compositional changes to be controlled accurately on an atomic scale.
These characteristics are exploited in making semiconductor lasers that produce light of any given wavelength within a considerable range. Such lasers are used, for example, in compact disc players and as light sources for optical fibre communication.
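The connection between alloy composition and emission wavelength can be made concrete. For Al(x)Ga(1−x)As, a widely quoted textbook approximation gives a direct band gap of about Eg ≈ 1.424 + 1.247x electron volts (valid for x below roughly 0.45), and a photon at the band-gap energy has wavelength λ = hc/Eg. The Python sketch below uses these approximate figures; the coefficients should be treated as illustrative rather than exact.

    # Emission wavelength of an Al(x)Ga(1-x)As laser as a function of
    # the aluminum fraction x. The band-gap formula is a common
    # textbook approximation, valid only for x < ~0.45 (direct gap).
    HC_EV_NM = 1239.84  # h*c expressed in eV*nm

    def band_gap_ev(x):
        """Approximate direct band gap of Al(x)Ga(1-x)As in eV."""
        return 1.424 + 1.247 * x

    def wavelength_nm(x):
        """Photon wavelength at the band-gap energy, in nanometres."""
        return HC_EV_NM / band_gap_ev(x)

    # Substituting aluminum for gallium widens the gap and shortens
    # the emitted wavelength -- continuously, as the text describes.
    for x in (0.0, 0.1, 0.2, 0.3):
        print(f"x = {x:.1f}: Eg = {band_gap_ev(x):.3f} eV, "
              f"wavelength = {wavelength_nm(x):.0f} nm")

Pure gallium arsenide (x = 0) comes out near 870 nm in the infrared, and raising the aluminum fraction to 0.3 shifts the emission to about 690 nm, illustrating how a laser can be designed for a chosen wavelength by choosing the composition.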

Digital electronics

Optoelectronics

