
In Electronics

Flexible Transistors Ready for Mass Production

Literal flexibility may bring the power of a new transistor developed at UW-Madison to digital devices that bend and move. (Image courtesy of Jung-Hun Seo, University at Buffalo, State University of New York.)
 

A team of engineers has created a functional flexible transistor and with it, a fast, simple and inexpensive fabrication process that's easily scalable to the commercial level.

It's an advance that could open the door to an increasingly interconnected world, enabling manufacturers to add "smart," wireless capabilities to any number of large or small products that curve, bend, stretch and move.

The University of Wisconsin-Madison researchers' advance is a twist on a two-decade-old industry standard: a BiCMOS (bipolar complementary metal oxide semiconductor) thin-film transistor, which combines speed, high current and low power dissipation in the form of heat and wasted energy—all on one surface.

As a result, these "mixed-signal" devices (with both analog and digital capabilities) deliver both brains and brawn and are the chip of choice for many of today's portable electronic devices, including cellphones.

"The industry standard is very good," said Zhenqiang (Jack) Ma, a professor in electrical and computer engineering at UW-Madison. "Now we can do the same things with our transistor—but it can bend."

Ma and his collaborators described their advance in the inaugural issue of the journal Flexible Electronics.

Making traditional BiCMOS flexible electronics is difficult, in part because the process takes several months and requires a multitude of delicate, high-temperature steps. Even a minor variation in temperature at any point could ruin all of the previous steps.

Ma and his collaborators fabricated their flexible electronics on a single-crystal silicon nanomembrane on a single bendable piece of plastic. The secret to their success is their unique process, which eliminates many steps and slashes both the time and cost of fabricating the transistors.

"In industry, they need to finish these in three months," he said. "We finished it in a week."

He says his group's much simpler high-temperature process can scale to industry-level production right away.

"The key is that parameters are important," Ma said. "One high-temperature step fixes everything—like glue. Now, we have more powerful mixed-signal tools. Basically, the idea is for flexible electronics to expand with this. The platform is getting bigger."

For more flexible electronics news, find out how this Printed Flexible Battery Could Power Wearable Sensors.


In Electronics

Silicon Transistors

 


During the 1950s, meanwhile, scientists and engineers at Bell Labs and Texas Instruments were developing advanced technologies needed to produce silicon transistors. Because of its higher melting temperature and greater reactivity, silicon was much more difficult to work with than germanium, but it offered major prospects for better performance, especially in switching applications. Germanium transistors make leaky switches; substantial leakage currents can flow when these devices are supposedly in their off state. Silicon transistors have far less leakage. In 1954 Texas Instruments produced the first commercially available silicon junction transistors and quickly dominated this new market—especially for military applications, in which their high cost was of little concern.

In the mid-1950s Bell Labs focused its transistor development efforts around new diffusion technologies, in which very narrow semiconductor layers—with thicknesses measured in microns, or millionths of a metre—are prepared by diffusing impurity atoms into the semiconductor surface from a hot gas. Inside a diffusion furnace the impurity atoms penetrate more readily into the silicon or germanium surface; their penetration depth is controlled by varying the density, temperature, and pressure of the gas as well as the processing time. (See integrated circuit: Fabricating ICs.) For the first time, diodes and transistors produced by these diffusion processes functioned at frequencies above 100 megahertz (100 million cycles per second). These diffused-base transistors could be used in receivers and transmitters for FM radio and television, which operate at such high frequencies.

Another important breakthrough occurred at Bell Labs in 1955, when Carl Frosch and Link Derick developed a means of producing a glassy silicon dioxide outer layer on the silicon surface during the diffusion process. This layer offered transistor producers a promising way to protect the silicon underneath from further impurities once the diffusion process was finished and the desired electrical properties had been established.

Texas Instruments, Fairchild Semiconductor Corporation, and other companies took the lead in applying these diffusion technologies to the large-scale manufacture of transistors. At Fairchild, physicist Jean Hoerni developed the planar manufacturing process, whereby the various semiconductor layers and their sensitive interfaces are embedded beneath a protective silicon dioxide outer layer. The company was soon making and selling planar silicon transistors, largely for military applications. Led by Robert Noyce and Gordon E. Moore, Fairchild’s scientists and engineers extended this revolutionary technique to the manufacture of integrated circuits.

In the late 1950s Bell Labs researchers developed ways to use the new diffusion technologies to realize Shockley’s original 1945 idea of a field-effect transistor (FET). To do so, they had to overcome the problem of surface-state electrons, which would otherwise have blocked external electric fields from penetrating into the semiconductor. They succeeded by carefully cleaning the silicon surface and growing a very pure silicon dioxide layer on it. This approach reduced the number of surface-state electrons at the interface between the silicon and oxide layers, permitting fabrication of the first successful field-effect transistor in 1960 at Bell Labs—which, however, did not pursue its development any further.

Refinements of the FET design by other companies, especially RCA and Fairchild, resulted in the metal-oxide-semiconductor field-effect transistor (MOSFET) during the early 1960s. The key problems to be solved were the stability and reliability of these MOS transistors, which relied upon interactions occurring at or near the sensitive silicon surface rather than deep inside. The two firms began to make MOS transistors commercially available in late 1964.

In early 1963 Frank Wanlass at Fairchild developed the complementary MOS (CMOS) transistor circuit, based on a pair of MOS transistors. This approach eventually proved ideal for use in integrated circuits because of its simplicity of production and very low power dissipation during standby operation. Stability problems continued to plague MOS transistors, however, until researchers at Fairchild developed solutions in the mid-1960s. By the end of the decade, MOS transistors were beginning to displace bipolar junction transistors in microchip manufacturing. Since the late 1980s CMOS has been the technology of choice for digital applications, while bipolar transistors are now used primarily for analog and microwave devices.

CMOS: A complementary metal-oxide semiconductor (CMOS) consists of a pair of semiconductors connected to a common secondary voltage such that they operate in opposite (complementary) fashion. Thus, when one transistor is turned on, the other is turned off, and vice versa.

Image Courtesy Encyclopædia Britannica, Inc.

Transistor Principles
The p-n junction

The operation of junction transistors, as well as most other semiconductor devices, depends heavily on the behaviour of electrons and holes at the interface between two dissimilar layers, known as a p-n junction. Discovered in 1940 by Bell Labs electrochemist Russell Ohl, p-n junctions are formed by adding two different impurity elements to adjacent regions of germanium or silicon. The addition of these impurity elements is called doping. Atoms of elements from Group 15 of the periodic table (which possess five valence electrons), such as phosphorus or arsenic, contribute an electron that has no natural resting place within the crystal lattice. These excess electrons are therefore loosely bound and relatively free to roam about, acting as charge carriers that can conduct electrical current. Atoms of elements from Group 13 (which have three valence electrons), such as boron or aluminum, induce a deficit of electrons when added as impurities, effectively creating “holes” in the lattice. These positively charged quantum mechanical entities are also fairly free to roam around and conduct electricity. Under the influence of an electric field, the electrons and holes move in opposite directions. During and immediately after World War II, chemists and metallurgists at Bell Labs perfected techniques of adding impurities to high-purity silicon and germanium to induce the desired electron-rich layer (known as the n-layer) and the electron-poor layer (known as the p-layer) in these semiconductors, as described in the section Development of transistors.

Movement of an electron hole in a crystal lattice. (Image Courtesy Encyclopædia Britannica, Inc.)

A p-n junction acts as a rectifier, similar to the old point-contact crystal rectifiers, permitting easy flow of current in only a single direction. If no voltage is applied across the junction, electrons and holes will gather on opposite sides of the interface to form a depletion layer that will act as an insulator between the two sides. A negative voltage applied to the n-layer will drive the excess electrons within it toward the interface, where they will combine with the positively charged holes attracted there by the electric field. Current will then flow easily. If instead a positive voltage is applied to the n-layer, the resulting electric field will draw electrons away from the interface, so combinations of them with holes will occur much less often. In this case current will not flow (other than tiny leakage currents). Thus, electricity will flow in only one direction through a p-n junction.
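This one-way behaviour is often summarized by the ideal-diode (Shockley) equation, I = Is·(e^(V/VT) − 1), where Is is the tiny saturation (leakage) current and VT is the thermal voltage. The short sketch below only illustrates that standard relationship; the saturation current and room-temperature thermal voltage used here are assumed values, not figures from the text above.

```python
import math

def diode_current(v_applied, i_sat=1e-12, v_thermal=0.02585):
    """Ideal-diode (Shockley) equation: I = I_s * (exp(V / V_T) - 1).

    v_applied : voltage across the p-n junction in volts
                (positive = forward bias, negative = reverse bias)
    i_sat     : assumed reverse saturation (leakage) current, in amperes
    v_thermal : thermal voltage kT/q at room temperature (about 25.85 mV)
    """
    return i_sat * (math.exp(v_applied / v_thermal) - 1.0)

# Forward bias: current grows exponentially once the junction conducts.
print(f"+0.6 V forward bias: {diode_current(+0.6):.3e} A")
# Reverse bias: only the tiny leakage (saturation) current flows.
print(f"-0.6 V reverse bias: {diode_current(-0.6):.3e} A")
```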

Junction transistors

Image Courtesy Encyclopædia Britannica, Inc.

Shortly after his colleagues John Bardeen and Walter H. Brattain invented their point-contact device, Bell Labs physicist William B. Shockley recognized that these rectifying characteristics might also be used in making a junction transistor. In a 1949 paper Shockley explained the physical principles behind the operation of these junctions and showed how to use them in a three-layer—n-p-n or p-n-p—device that could act as a solid-state amplifier or switch. Electric current would flow from one end to the other, with the voltage applied to the inner layer governing how much current rushed by at any given moment. In the n-p-n junction transistor, for example, electrons would flow from one n-layer through the inner p-layer to the other n-layer. Thus, a weak electrical signal applied to the inner, base layer would modulate the current flowing through the entire device. For this current to flow, some of the electrons would have to survive briefly in the presence of holes; in order to reach the second n-layer, they could not all combine with holes in the p-layer. Such bipolar operation was not at all obvious when Shockley first conceived his junction transistor. Experiments with increasingly pure crystals of silicon and germanium showed that it indeed occurred, making bipolar junction transistors possible.

Cross section of an n-p-n transistor and its electronic symbol. (Image Courtesy Encyclopædia Britannica, Inc.)

MOS-type transistors

In metal-oxide-semiconductor (MOS) transistors, it is the distance between source and drain that largely determines the operating frequency. In an n-channel MOS (NMOS) transistor, for example, the source and the drain are two n-type regions that have been established in a piece of p-type semiconductor, usually silicon. Except for the two points at which metal leads contact these regions, the entire semiconductor surface is covered by an insulating oxide layer. The metal gate, usually aluminum, is deposited atop the oxide layer just above the gap between source and drain. If there is no voltage (or a negative voltage) upon the gate, the semiconductor material beneath it will contain excess holes, and very few electrons will be able to cross the gap, because one of the two p-n junctions will block their path. Therefore, no current will flow in this configuration—other than unavoidable leakage currents. If the gate voltage is instead positive, an electric field will penetrate through the oxide layer and attract electrons into the silicon layer (often called the inversion layer) directly beneath the gate. Once this voltage exceeds a specific threshold value, electrons will begin flowing easily between source and drain. The transistor turns on.

Analogous behaviour occurs in a p-channel MOS transistor, in which the source and the drain are p-type regions formed in n-type semiconductor material. Here a negative voltage above a threshold induces a layer of holes (instead of electrons) beneath the gate and permits a current of them to flow from source to drain. For both n-channel and p-channel MOS (also called NMOS and PMOS) transistors, the operating frequency is largely governed by the speed at which the electrons or holes can drift through the semiconductor material divided by the distance from source to drain. Because electrons have mobilities through silicon that are about three times higher than holes, NMOS transistors can operate at substantially higher frequencies than PMOS transistors. Small separations between source and drain also promote high-frequency operation, and extensive efforts have been devoted to reducing this distance.

In the 1960s Frank Wanlass of Fairchild Semiconductor recognized that combinations of an NMOS and a PMOS transistor would draw extremely little current in standby operation—just the tiny, unavoidable leakage currents. These CMOS, or complementary metal-oxide-semiconductor, transistor circuits consume significant power only when the gate voltage exceeds some threshold and a current flows from source to drain. Thus, they can serve as very low-power devices, often a million times lower than the equivalent bipolar junction transistors. Together with their inherent simplicity of fabrication, this feature of CMOS transistors has made them the natural choice for manufacturing microchips, which today cram millions of transistors into a surface area smaller than a fingernail. In such cases the waste heat generated by the component’s power consumption must be kept to an absolute minimum, or the chips will simply melt.
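As a rough illustration of this complementary behaviour, the sketch below treats the NMOS and PMOS devices of a CMOS inverter as ideal voltage-controlled switches. The supply and threshold voltages are assumptions chosen only for illustration.

```python
def cmos_inverter(v_in, v_dd=1.8, v_tn=0.5, v_tp=-0.5):
    """Model a CMOS inverter with its two transistors as ideal switches.

    The NMOS conducts when its gate-to-source voltage exceeds v_tn;
    the PMOS conducts when its gate-to-source voltage is below v_tp.
    In steady state exactly one of the pair is on, so essentially no
    current flows from the supply except while the gate is switching.
    """
    nmos_on = (v_in - 0.0) > v_tn    # NMOS gate referenced to ground
    pmos_on = (v_in - v_dd) < v_tp   # PMOS gate referenced to v_dd
    if nmos_on and not pmos_on:
        return 0.0                   # output pulled to ground (logic 0)
    if pmos_on and not nmos_on:
        return v_dd                  # output pulled to v_dd (logic 1)
    return None                      # transition region: both partially on

print(cmos_inverter(0.0))   # logic-low input  -> 1.8 (logic-high output)
print(cmos_inverter(1.8))   # logic-high input -> 0.0 (logic-low output)
```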

Article Courtesy Encyclopædia Britannica, Inc.


In Electronics

Vacuum Tubes: Luckily, They’re Still Around...


In any modern day electrical device—from alarm clocks to phones to computers to televisions—you’ll find a device called a transistor. In fact, you’ll find billions of them. Transistors are the atoms of modern-day computing, combining to create the logic gates that enable computation. The invention of the transistor in 1947 opened the door to the information age as we know it.

But computers existed before transistors did, in a rather rudimentary form. These massive systems took up entire rooms, weighed thousands of pounds, and for all that, were nowhere near as powerful as the computers that we can fit in our pockets today.

Rather than being built out of transistors, these behemoth computers were made up of something called thermionic valves, aka vacuum tubes. These lightbulb-looking devices are now more or less obsolete (with one or two notable exceptions), but in their heyday, they were critical to the design of many electronic systems, from radios to telephones to computers. In this article, we’ll take a look at how vacuum tubes work, why they went away, and why they didn’t go away entirely.

Thermionic Emission:

The basic working principle of a vacuum tube is a phenomenon called thermionic emission. It works like this: you heat up a metal, and the thermal energy knocks some electrons loose. In 1904, English physicist John Ambrose Fleming took advantage of this effect to create the first vacuum tube device, which he called an oscillation valve.

Fleming’s device consisted of two electrodes, a cathode and an anode, placed on either end of an encapsulated glass tube. When the cathode is heated, it gives off electrons via thermionic emission. Then, by applying a positive voltage to the anode (also called the plate), these electrons are attracted to the plate and can flow across the gap. By removing the air from the tube to create a vacuum, the electrons have a clear path from the cathode to the anode, and a current is created.


This type of vacuum tube, consisting of only two electrodes, is called a diode. The term diode is still used today to refer to an electrical component that only allows an electric current to flow in one direction, although today these devices are all semiconductor based. In the case of the vacuum tube diode, a current can only flow from the anode to the cathode. (Though the electrons travel from the cathode to the anode, recall that the direction of conventional current is opposite to the actual movement of electrons, an annoying holdover from electrical engineering history.) Diodes are commonly used for rectification, that is, converting an alternating current (AC) to a direct current (DC).

Third Electrode’s the Charm:

While diodes are quite a handy device to have around, they did not set the limit for vacuum tube functionality. In 1907, American inventor Lee de Forest added a third electrode to the mix, creating the first triode tube. This third electrode, called the control grid, enabled the vacuum tube to be used not just as a rectifier, but as an amplifier of electrical signals.

The control grid is placed between the cathode and anode, and is in the shape of a mesh (the holes allow electrons to pass through it). By adjusting the voltage applied to the grid, you can control the number of electrons flowing from the cathode to the anode. If the grid is given a strong negative voltage, it repels the electrons from the cathode and chokes the flow of current. The more you increase the grid voltage, the more electrons can pass through it, and the higher your current. In this way, the triode can serve as an on/off switch for an electrical current, as well as a signal amplifier.

The triode is useful for amplifying signals because a small change in the control grid voltage leads to a large change in the plate current. In this way, a small signal at the grid (like a radio wave) can be converted into a much larger signal, with the same exact waveform, at the plate. Note that you could also increase the plate current by increasing the plate voltage, but you’d have to change it by a greater amount than the grid voltage to achieve the same amplification of current.
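To make that concrete, here is a minimal sketch using a linearized triode model in which a change in grid voltage produces a change in plate current equal to the transconductance times the grid swing. The transconductance and load-resistance values are assumed purely for illustration and do not come from the article.

```python
def triode_output_swing(grid_swing_v, g_m=0.005, r_load=10_000):
    """Small-signal (linearized) triode model.

    A change in grid voltage produces a plate-current change of roughly
    g_m * grid_swing_v, where g_m is the transconductance in A/V.  That
    current change, flowing through the plate load resistor, appears as
    a much larger voltage swing at the output.
    """
    plate_current_swing = g_m * grid_swing_v   # amperes
    return plate_current_swing * r_load        # volts across the load

v_grid = 0.1                                   # a 100 mV signal on the grid
v_plate = triode_output_swing(v_grid)
print(f"{v_grid} V at the grid -> {v_plate} V across the load "
      f"(voltage gain of about {v_plate / v_grid:.0f})")
```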

But why stop at three electrodes when you could have four? Or five, for that matter? Further enhancements of vacuum tubes placed an additional grid (called the screen grid) and yet another (called the suppressor grid) even closer to the anode, creating the types of vacuum tube called the tetrode and the pentode, respectively. These extra grids solve some stability problems and address other limitations with the triode design, but the function remains largely the same.

The Transistor is Born, but Tubes live on...
A replica of the first transistor created in 1947

In 1947, the trio of physicists William Shockley, Walter Brattain and John Bardeen created the world’s first transistor, and marked the beginning of the end for the vacuum tube. The transistor could replicate all the functions of tubes, like switching and amplification, but was made out of semiconductor materials.

Once the transistor cat was let out of the bag, vacuum tubes were on their way to extinction in all but the most specific of applications. Transistors are much more durable (vacuum tubes, like light bulbs, will eventually need to be replaced), much smaller (imagine fitting 2 billion tubes inside an iPhone), and require much less voltage than tubes in order to function (for one thing, transistors don’t have a filament that needs heating).

Despite the emergence of the transistor, vacuum tubes aren’t completely extinct, and they remain useful in a handful of niche applications . For example, vacuum tubes are still used in high power RF transmitters, as they can generate more power than modern semiconductor equivalents. For this reason, you’ll find vacuum tubes in particle accelerators, MRI scanners, and even microwave ovens.

But perhaps the most charming modern application of vacuum tubes is in the musical community. Audiophiles swear by the quality of vacuum tube amplifiers, preferring their sound to semiconductor amps, and many professional musicians won’t consider using anything in their place. Whether there’s any merit to this preference is a matter of some debate, but you can dive more into the fascinating world of tube sound in this thorough IEEE Spectrum article.



In Electronics

5G: It's All About the Antennas


For 5G technology to function as expected in applications from factory automation to self-driving vehicles, multiple antennas must be properly implemented.

This article originally appeared in Microwaves & RF.

When studying 5G NR operation, it’s not immediately obvious that 5G meets all of the objectives of the 3GPP standard by using advanced antenna technology. Antennas are often overlooked and treated with indifference. After all, antennas are just that nuisance metal thing that you have to put on a radio to make it work. In the case of 5G, antennas play a major role in achieving the expected features and performance.

The primary objectives of 5G NR are:

  • eMBB: Enhanced mobile broadband (eMBB) means more subscriber capacity and higher data rates. Increase subscriber capacity by at least a factor of 1,000 over LTE and boost downlink (DL) data rate to 10 Gb/s with a minimum of 100 Mb/s for every subscriber.
  • mMTC: Massive machine-type communications (mMTC) effectively means the Internet of Things (IoT). The 5G standard meets the needs of low power consumption, low cost, and low data rate generally associated with IoT to wirelessly connect millions of different things to the internet.
  • uRLLC: Ultra-reliable low-latency communications. Latency is the time delay between an initiating action and the moment that action takes effect. In many wireless systems, this delay is harmful or even a deal-breaker.

Factory automation with robots, advanced driver-assistance systems (ADAS) that improve safety in new vehicles, and, ultimately, self-driving cars and trucks all rely on low latency. The new 5G standard claims a latency of 1 ms or less, which should satisfy these needs.

New 5G systems are proving that these objectives can be met, especially eMBB, by using multiple antenna methods:

MIMO

The first antenna technology that leads to the highly desirable features of 5G is multiple-input, multiple-output (MIMO). MIMO uses multiple antennas plus their transceivers. The serial data to be transmitted is divided into multiple data streams, each of which modulates an individual carrier. All such signals are then transmitted simultaneously over the same bandwidth.

Because the antennas for each channel are adequately spaced, each signal will travel a slightly different path. This ensures that fading and other problems are greatly minimized, thereby improving link reliability and leading to fewer dropped calls and texts.

MIMO systems define the number of antennas and paths. For example, there may be four transmitting antennas and four receive antennas expressed as 4 × 4 or 4T4R. A variety of MIMO configurations can be built; 5G can define up to 8T8R.

In addition to improving link reliability, the multiple streams can boost data rate by a factor of N, where N is the number of transmit antennas. Data rates to several gigabits per second are possible. Add to that the wider channels of 40, 80, and 160 MHz plus carrier aggregation and modulation up to 64QAM and the data rate can soar.
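As a very rough back-of-the-envelope illustration of how stream count, channel width, and modulation combine, the sketch below multiplies the number of spatial streams by an approximate per-stream rate. The coding-rate and overhead factors are assumptions for illustration, not values from the 5G NR specification.

```python
def rough_peak_rate(streams, bandwidth_hz, bits_per_symbol,
                    coding_rate=0.8, overhead_factor=0.9):
    """Very rough peak-throughput estimate for a MIMO link.

    streams         : number of parallel spatial streams (N)
    bandwidth_hz    : channel bandwidth in hertz
    bits_per_symbol : e.g. 6 for 64QAM, 8 for 256QAM
    coding_rate     : assumed fraction of bits that carry user data
    overhead_factor : assumed fraction of capacity left after overhead
    """
    # Assume roughly one symbol per hertz of bandwidth per stream.
    per_stream = bandwidth_hz * bits_per_symbol * coding_rate * overhead_factor
    return streams * per_stream   # bits per second, summed over all streams

# Example: 4 streams over a 100 MHz channel using 64QAM (6 bits per symbol).
rate = rough_peak_rate(streams=4, bandwidth_hz=100e6, bits_per_symbol=6)
print(f"roughly {rate / 1e9:.1f} Gb/s")
```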



Beamforming:

The other antenna technology that makes 5G work is agile beamforming. This is the process of using special antennas to produce very narrow beams that can be rotated to point in a desired direction. This technology is more likely to be used in the millimeter-wave (mmWave) bands.

Highly focused beams concentrate the signal, which means the effective radiated power is boosted. Stronger narrow beams travel farther and can sometimes penetrate buildings and other obstacles more effectively. And the ability to position beams over a wide angle makes it possible to minimize or null out strong interfering signals.

The technology that provides this capability is the phased array. Phased arrays are panels of many small antenna elements, each with its own TX and RX path plus gain control and phase adjustment. By adjusting the amplitude and phase of each element's signal, the contributions from all the elements are summed so that they add or subtract (interfere), allowing beams of various widths to be formed and pointed in a desired direction, as sketched below.
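The sketch below computes the array factor of a uniform linear array with a progressive phase shift across its elements. The element count and half-wavelength spacing are assumptions chosen only to show how coherent addition forms and steers a beam.

```python
import cmath
import math

def array_factor(steer_deg, look_deg, n_elements=8, spacing_wavelengths=0.5):
    """Normalized array-factor magnitude of a uniform linear array.

    Each element is fed with a progressive phase shift chosen so that the
    element signals add in phase in the steering direction; in other
    directions they partially cancel, which is what shapes the beam.
    """
    k_d = 2 * math.pi * spacing_wavelengths            # phase per element spacing
    steer_phase = -k_d * math.sin(math.radians(steer_deg))
    total = sum(
        cmath.exp(1j * n * (k_d * math.sin(math.radians(look_deg)) + steer_phase))
        for n in range(n_elements)
    )
    return abs(total) / n_elements

# Beam steered to 30 degrees: strong response there, a null at broadside.
print(f"response toward 30 deg: {array_factor(30, 30):.2f}")   # ~1.00
print(f"response toward  0 deg: {array_factor(30, 0):.2f}")    # ~0.00
```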

Phased arrays have been used for years in the military for radar. Some are as big as a building, with others mounted on the front of a ship or in the nose of a plane. They were and are expensive. Now you can buy a phased array on a chip. Phased arrays boost signal power, thereby extending the transmission range and helping the signals go deeper into buildings.

Oh yes, phased arrays can also be used as MIMO antennas. This makes it possible to implement multi-user MIMO, a variant of plain-old MIMO that lets one large antenna array be partitioned into smaller groups of elements, each group dedicated to one of the many users accessing the cell site.


Smartphone Antennas

We don’t think about antennas when we’re buying or using a smartphone. Yet they’re more important than you think. The typical smartphone has maybe a half-dozen antennas. At least two are for the lower and upper cellular frequencies. But with 5G, MIMO must be added. To do so would require two each for the upper and lower bands. One popular combination is 4 × 2 (or 4T2R).

That means many, if not most, new 5G phones will have four antennas for the cellular bands. These antennas will likely have some automatic antenna-tuning capability.

Also in the mix are one or two antennas for Wi-Fi and Bluetooth. Since both wireless technologies operate at 2.4 GHz, it's possible to share an antenna. Then there's a GPS receiver antenna. If you know a little about antennas, you probably know that each individual antenna should be spaced as far as possible from the others to avoid interaction. That's tough to do in a small handset. Thankfully, the operating frequencies are high, so the wavelengths, and therefore the antennas, are short.

One more thing: if we're talking about a 5G mmWave phone, you will need another antenna. With 5G on the lower cellular bands, the regular smartphone antennas will work. But if you have a mmWave version of 5G, you'll have a small phased array inside the handset.

I almost forgot the NFC antenna. The near-field communications radio operates at 13.56 MHz. Its antenna is usually a small coil, and it too eats up lots of space inside the phone. NFC use is increasing, and it could see new applications now that the standard supports two-way data exchange.

So, it really is all about the antennas these days. Can’t do without them.


In Electrical Machines Electronics

Faraday's Law of Induction and Lenz's Law


Faraday’s experiments show that the emf induced by a change in magnetic flux depends on only a few factors. First, the emf is directly proportional to the change in flux ΔΦ. Second, the emf is greatest when the change in time Δt is smallest; that is, the emf is inversely proportional to Δt. Finally, if a coil has N turns, the emf produced is N times greater than for a single turn, so the emf is directly proportional to N.
The equation for the emf induced by a change in magnetic flux is

emf = −N•(ΔΦ/Δt)

This relationship is known as Faraday's law of induction. The units for emf are volts, as usual. The minus sign in Faraday's law of induction is very important: it means that the emf creates a current I and magnetic field B that oppose the change in flux ΔΦ. This opposition, expressed by the minus sign, is known as Lenz's law.
The Russian physicist Heinrich Lenz (1804–1865), like Faraday and Henry, independently investigated aspects of induction. Faraday was aware of the direction of the induced emf, but Lenz stated it so clearly that he is credited with its discovery.
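A quick numerical check of the relationship above, with the number of turns, flux change, and time interval chosen purely for illustration:

```python
def induced_emf(n_turns, delta_flux_wb, delta_t_s):
    """Faraday's law of induction: emf = -N * (delta_flux / delta_t).

    The minus sign encodes Lenz's law: the induced emf drives a current
    whose magnetic field opposes the change in flux.
    """
    return -n_turns * (delta_flux_wb / delta_t_s)

# Illustrative numbers: flux through a 200-turn coil rises by 0.05 Wb in 0.1 s.
print(f"{induced_emf(200, 0.05, 0.1):.1f} V")   # -100.0 V, opposing the increase
```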


From the figure above:
(A) When the bar magnet is thrust into the coil, the strength of the magnetic field through the coil increases. The current induced in the coil creates another field, in the opposite direction of the bar magnet's field, to oppose the increase. This is one aspect of Lenz's law: induction opposes any change in flux.
(B) and (C) show two other situations. Verify for yourself that the direction of the induced B_coil shown indeed opposes the change in flux and that the current direction shown is consistent with the right-hand rule.


In Electrical Machines Electronics

Introduction to Electrical Machines



What is an Electrical Machine?
An electrical machine is a device that converts mechanical energy into electrical energy or vice versa. Electric machines also include transformers, which do not convert between mechanical and electrical form but instead convert AC power from one voltage level to another.
Electric machines were developed beginning in the mid-19th century. In the form of generators, they produce nearly all of the electric power on Earth, and in the form of electric motors they consume approximately 60-70% of all electric power produced. Developing more efficient electric machine technology is a crucial effort for energy conservation and green energy.


Electric machines are further classified into:
A. Static Machines
     a. Transformers
B. Dynamic Machines
     a. Generators
          i. AC Generators
         ii. DC Generators
     b. Motors
          i. AC Motors
         ii. DC Motors


In Amplifiers Electronics

Introduction to Amplifiers


An amplifier is used to increase the amplitude of a signal waveform without changing other parameters of the waveform, such as frequency or wave shape. Amplifiers are among the most commonly used circuits in electronics and perform a variety of functions in a great many electronic systems.

In "Electronics” small signal amplifiers are commonly used devices as they have the ability to amplify a relatively small input signal, for example from a Sensor such as a photo-device, into a much larger output signal to drive a relay, lamp or loudspeaker for example.
There are many forms of electronic circuits classed as amplifiers, from Operational Amplifiers and Small Signal Amplifiers up to Large Signal and Power Amplifiers. The classification of an amplifier depends upon the size of the signal, large or small, its physical configuration and how it processes the input signal, that is the relationship between input signal and current flowing in the load.
The type or classification of an Amplifier is given in the following table.

Classification of Signal System:


An amplifier can be thought of as a simple box or block containing the amplifying device, such as a Bipolar Transistor, Field Effect Transistor or Operational Amplifier, which has two input terminals and two output terminals (ground being common), with the output signal being much greater than the input signal because it has been “amplified”.

An ideal signal amplifier will have three main properties: Input Resistance (Rin), Output Resistance (Rout) and, of course, amplification, commonly known as Gain (A). No matter how complicated an amplifier circuit is, a general amplifier model can still be used to show the relationship between these three properties.

Ideal Amplifier Model:



Amplifier gain is simply the ratio of the output divided by the input. Gain has no units, as it is a ratio, but in electronics it is commonly given the symbol “A”, for Amplification. The gain of an amplifier is then simply calculated as the output signal divided by the input signal.


Amplifier Gain:

The amplifier gain can be described as the relationship between the signal measured at the output and the signal measured at the input. There are three different kinds of amplifier gain that can be measured: Voltage Gain (Av), Current Gain (Ai) and Power Gain (Ap), depending upon the quantity being measured. Examples of these different types of gain are given below.


Voltage Amplifier Gain:

Av = Vout / Vin

Current Amplifier Gain:

Ai = Iout / Iin

Power Amplifier Gain:

Ap = Pout / Pin = Av × Ai

The power gain (Ap), or power level, of the amplifier can also be expressed in decibels (dB). The bel (B) is a logarithmic (base-10) unit of measurement. Since the bel is rather a large unit of measure, it is prefixed with deci, making it the decibel, with one decibel being one tenth (1/10th) of a bel. To calculate the gain of the amplifier in decibels (dB), we can use the expressions below, followed by a short numerical sketch.

  • Voltage Gain in dB:   av  =  20•log(Av)
  • Current Gain in dB:   ai  =  20•log(Ai)
  •  Power Gain in dB:   ap  =  10•log(Ap)
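A short numerical sketch of these conversions, using made-up input and output values:

```python
import math

def voltage_gain_db(v_out, v_in):
    return 20 * math.log10(v_out / v_in)   # av = 20*log(Av)

def current_gain_db(i_out, i_in):
    return 20 * math.log10(i_out / i_in)   # ai = 20*log(Ai)

def power_gain_db(p_out, p_in):
    return 10 * math.log10(p_out / p_in)   # ap = 10*log(Ap)

# Illustrative values: 1 V in, 10 V out gives Av = 10, i.e. 20 dB of voltage gain.
print(f"{voltage_gain_db(10, 1):.1f} dB")   # 20.0
# 1 W in, 100 W out gives Ap = 100, i.e. 20 dB of power gain.
print(f"{power_gain_db(100, 1):.1f} dB")    # 20.0
```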
