1 Linear and Nonlinear Circuits



This chapter has a two-fold objective. First, it introduces the nomenclature that will be used throughout the book. Second, it presents the basic mathematical theory necessary to describe nonlinear systems, which will help the reader to understand their rich set of behaviors. This will clarify several important distinctions between linear and nonlinear circuits and their mathematical representations.


We shall start with a brief review of linearity and linear systems, their main properties and underlying assumptions. A reader familiar with the linear-system realm can appreciate the limitations of the theoretical abstraction framed by the mathematical concept of linearity, recognize its borders of validity, and so be prepared to cross them, i.e., to enter the natural world of nonlinearity. We will then introduce nonlinear systems and the responses that we should expect from them. After this, we will study one static, or memoryless, nonlinearity and a dynamic one, i.e., one that exhibits memory. This will then establish the foundations of nonlinear static and dynamic models and their basic extraction procedures.


The chapter is presented as follows: Section 1.1 is devoted to nomenclature and Section 1.2 reviews linear system theory. Sections 1.3 and 1.4 illustrate the types of behaviors found in general nonlinear systems and, in particular, in nonlinear RF and microwave circuits. Then, Sections 1.5 and 1.6 present the theory of nonlinear static and dynamic systems that will be useful to understand the nonlinear circuit simulation algorithms treated in Chapter 2 and the device modeling techniques of Chapters 3–6. The mathematics of nonlinear systems, and of dynamic ones in particular, is neither easy nor trivial, so we urge you not to feel discouraged if you do not understand it all on a first reading. What you will find in the next chapters will certainly help provide a physical meaning and practical usefulness to most of these sometimes abstract mathematical formulations. Finally, Section 1.7 closes this chapter with a brief conclusion.



1.1 Basic Definitions


We will frequently use the notion of model and system, so it is convenient to first identify these concepts.



1.1.1 Model


A model is a mathematical description, or representation, of a set of particular features of a physical entity that combines the observable (i.e., measurable) magnitudes and our previous knowledge about that entity. Models enable the simulation of a physical entity and so allow a better understanding of its observed behavior and provide predictions of behaviors not yet observed. As models are simplifications of the physically observable, they are, by definition, an approximation and restricted to represent a subset of all possible behaviors of the physical device.



1.1.2 System


As depicted in Figure 1.1, a system is a model of a machine or mechanism that transforms an input (excitation, or stimulus, usually assumed as a function of time), x(t), into an output (or response, also varying in time), y(t). Mathematically, it is defined as the following operator: y(t) = S[x(t)], in which x(t) and y(t) are, themselves, mathematical representations of the input and output measurable signals, respectively. Please note that, contrary to ordinary mathematical functions, which operate on numbers (i.e., that for a given input number, x, they respond with an output number, y = f(x)), mathematical operators map functions, such as x(t), onto other functions, y(t). So, they are also known as mathematical function maps. And, similar to what is required for functions, a particular input must be mapped onto a particular, unique, output.





Figure 1.1 Illustration of the system concept.


When the operator is such that its response at a particular instant of time, y(t0), is only dependent on that particular input instant, x(t0), i.e., the system transforms each input value onto the corresponding output value, the operator is reduced to a function and the system is said to be static or memoryless. When, on the other hand, the system output cannot be uniquely determined from the instantaneous input only, but depends on x(t0) as well as on past and future values of x(t), x(t ± τ), i.e., the system is now an operator of the whole x(t) onto y(t), we say that the system is dynamic or that it exhibits memory. (In practice, real systems cannot depend on future values because they must be causal.) For example, resistive networks are static systems, whereas networks that include energy storage elements (memory), such as capacitors, inductors or transmission lines, are dynamic.


Defined this way, this notion of a system can be used as a representation, or model, of any physical device, which can either be an individual component, a circuit or a set of circuit blocks. An interesting feature of this definition is that a system is nestable, i.e., it is such that a block (circuit) made of interconnected individual systems (circuit elements or components) can still be treated as a system. So, we will use this concept of system whenever we want to refer to the properties that we normally observe in components or circuits.



1.1.3 Time Invariance


Although the system response, y(t), varies in time, that does not necessarily mean that the system varies in time. The change in time of the response can be only a direct consequence of the input variation with time. This time-invariance of the operator is expressed by stating that the system reacts in exactly the same way regardless of the time at which it is subjected to the same input. That is, if the response to x(t) is y(t) = S[x(t)], and another test is made after a certain amount of time, τ, then the response will be exactly the same as before, except that now it will naturally be delayed by that same amount of time: y(t − τ) = S[x(t − τ)]. This defines a time-invariant system. If, on the other hand, y(t − τ) ≠ S[x(t − τ)], then the system is said to be time-variant.


The vast majority of physical systems, and thus of electronic circuits, are time-invariant. Therefore, we will assume that all systems referred to in this and succeeding chapters are time-invariant unless otherwise explicitly stated.


After completing the study of this chapter, the reader may try Exercise 1.5, which is a good example of how this time-variance property enables us to treat a modulator that is inherently nonlinear and time-invariant as a much simpler linear time-variant system.



1.2 Linearity and the Separation of Effects


Now we will define a linear system as one that obeys superposition and recall how we use this property to determine the response of a linear system to a general excitation.



1.2.1 Superposition


A system is said to be linear if it obeys the principle of superposition, i.e., if it shares the properties of additivity and homogeneity.


The additivity property means that if y1(t) is the system response to x1(t), y1(t) = S[x1(t)], y2(t) is the system’s response to x2(t), y2(t) = S[x2(t)], and yT(t) is the response to x1(t) + x2(t), then



yT(t) = S[x1(t) + x2(t)] = S[x1(t)] + S[x2(t)] = y1(t) + y2(t)    (1.1)

The additivity property is the mathematical statement that affirms that a linear system reacts to an additive composition of stimuli as an additive composition of responses, as if the system could distinguish each of the stimuli and treat them separately. In practical terms, this means that if, in the lab, an experiment with a cause x1(t) produces an effect y1(t), and another, independent, experiment on a cause x2(t) produces y2(t), then a third experiment, now made on the stimulus x1(t) + x2(t), will produce a response that is the numerical summation of the two previously obtained effects, y1(t) + y2(t).


On the other hand, the homogeneity property means that if α is a constant, then the response to αx(t) will be αy(t), i.e.,



S[αx(t)] = αS[x(t)] = αy(t)    (1.2)

The homogeneity property is the mathematical description of proportionality that says that an α times larger cause produces an α times larger effect. However, it does not necessarily state that the effects are proportional to their corresponding causes. For example, although the current and the voltage in a constant (linear) capacitance obey the homogeneity principle, they are not proportional to each other. In fact, since the current in a capacitor is given by (1.3), the current produced by a twice as large vc(t) will be twice as large as ic(t). However, that does not mean that ic(t) is proportional to vc(t), as can be readily noticed when vc(t) is a ramp in time and ic(t) is a constant.


ic(t) = C dvc(t)/dt    (1.3)
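To make this concrete, consider a ramp excitation (a simple worked check of (1.3), not tied to any particular circuit): vc(t) = kt gives ic(t) = C d(kt)/dt = Ck, while the doubled cause vc(t) = 2kt gives the doubled effect ic(t) = 2Ck. Homogeneity is obeyed, even though a constant current is in no way proportional to a voltage ramp.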

In summary, linear systems obey the principle of superposition,


S[α1x1(t) + α2x2(t)] = S[α1x1(t)] + S[α2x2(t)] = α1S[x1(t)] + α2S[x2(t)] = α1y1(t) + α2y2(t)    (1.4)


1.2.2 Response of a Linear System to a General Excitation


Superposition has very useful consequences that we now briefly review. They all revolve around that idea of the separation of effects, whereby we can expand any previously untested stimulus into a summation of previously tested excitations, making general predictions about the system responses.



1.2.2.1 Linear Response in the Time Domain

In the time domain, this means that, if we represent any input, x(t), as composed of the succession of its time samples, x(nTs)Ts, taken at regular intervals, Ts, of a constant sampling frequency fs = 1/Ts, so that they asymptotically produce the same effect as x(t),


x(t) ≈ Σ_{n=−N}^{+N} x(nTs) Ts δ(t − nTs)    (1.5)

in which δ(t − nTs) is the Dirac delta, or impulse, function centered at nTs, and n is the sample index (see Figure 1.2(a)), and we know the response of the system to one of these impulse functions of unit amplitude, h(t) = S[δ(t)] (see Figure 1.2(b)), then we can readily predict the response to any arbitrary input x(t) as


y(t) ≡ S[x(t)] ≈ S[Σ_{n=−N}^{+N} x(nTs) Ts δ(t − nTs)] = Σ_{n=−N}^{+N} S[x(nTs) Ts δ(t − nTs)] = Σ_{n=−N}^{+N} x(nTs) Ts S[δ(t − nTs)] = Σ_{n=−N}^{+N} x(nTs) h(t − nTs) Ts    (1.6)

by simply making use of the additivity and homogeneity properties (as shown in Figure 1.2(c)). Expression (1.6) is exact in the limit when the sampling interval, Ts, tends to zero and N tends to infinity, becoming the well-known convolution integral:


y(t) ≡ S[x(t)] = ∫_{−∞}^{+∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{+∞} h(τ) x(t − τ) dτ    (1.7)
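As a quick numerical illustration of (1.6), the short Python sketch below approximates a linear system's output by the sampled convolution sum; the first-order impulse response and all numerical values are assumptions made only for this example.

import numpy as np

Ts = 1e-3                           # sampling interval Ts (s)
t = np.arange(0.0, 0.5, Ts)
tau = 50e-3
h = (1.0/tau)*np.exp(-t/tau)        # assumed first-order impulse response h(nTs)
x = np.sin(2*np.pi*5*t)             # an arbitrary input x(nTs)
y = np.convolve(x, h)[:len(t)]*Ts   # sum of x(kTs)*h((n-k)Ts)*Ts, as in (1.6)
print(y[:5])

As Ts is reduced, this discrete sum approaches the convolution integral of (1.7).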




Figure 1.2 Response, y(t), of a linear dynamic and time-invariant system to an arbitrary input, x(t), when this stimulus is expanded in a summation of Dirac delta functions. (a) Input expansion with the basis of delayed Dirac delta functions, x(n) = x(nTs)δ(t − nTs). (b) Impulse response of the system, h(t) = S[δ(t)]. (c) Response of the system to x(t), y(t) = S[x(t)].



1.2.2.2 Linear Response in the Frequency Domain

So, in the time domain, we only needed to know the system response to one input basis function – the impulse response, h(t) = S[δ(t)] – to be able to predict the response to any other arbitrary input. Similarly, in the frequency domain we only need to know the response to one input basis function, the cosine, although tested at all frequencies, to predict the response to any arbitrary periodic input.


Actually, since the cosine can be given as the additive combination of two complex exponentials


A cos(ωt) = A (e^{jωt} + e^{−jωt})/2    (1.8)

from a mathematical viewpoint, we only need to know the response to that basic complex exponential. This response can be obtained from (1.7) as


∫_{−∞}^{+∞} h(τ) e^{jω(t − τ)} dτ = e^{jωt} ∫_{−∞}^{+∞} h(τ) e^{−jωτ} dτ = H(ω) e^{jωt}    (1.9)

in which H(ω) is the Fourier transform of h(τ). This is an interesting result that tells us that the response to an arbitrary x(t) can be easily computed by summing up the Fourier components of that input scaled by the system’s response to each particular frequency. Indeed, if R(ω) is the frequency-domain Fourier representation of a time-domain signal r(t), so that


R(ω) = ∫_{−∞}^{+∞} r(t) e^{−jωt} dt    (1.10a)

and


r(t) = (1/2π) ∫_{−∞}^{+∞} R(ω) e^{jωt} dω    (1.10b)

then, the substitution of (1.10) into (1.7) would lead to



Y(ω) = H(ω)X(ω)    (1.11)

where Y(ω) can be related to y(t) – as X(ω) is related to x(t) – by the Fourier transform of (1.10). This expression tells us the following two important things.


First, the time-domain convolution of (1.7) between the input, x(t), and the impulse response, h(τ), becomes the product of the frequency-domain representation of these two entities, X(ω) and H(ω), respectively.


Second, the response of a linear time-invariant system to a continuous-wave (CW) signal (an unmodulated carrier of frequency ω, specifically cos(ωt)) is another CW signal of the same frequency with, possibly, different amplitude and phase. Consequently, the response to a signal of complex spectrum will only have frequency-domain components at the frequencies already present at the input. A time-invariant linear system is incapable of generating new frequency components or of performing any qualitative transformation of the input spectrum.
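This can be checked numerically with the short sketch below, which applies (1.11) to a two-tone input; the sampling rate and the assumed first-order impulse response are illustrative choices only, and the output spectrum contains energy at no frequencies other than the two applied tones.

import numpy as np

fs = 1000.0                                             # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1/fs)
x = np.cos(2*np.pi*50*t) + 0.5*np.cos(2*np.pi*120*t)    # two-tone input
tau = 5e-3
h = (1.0/tau)*np.exp(-t/tau)/fs                         # sampled impulse response of an assumed first-order system
X, H = np.fft.rfft(x), np.fft.rfft(h)
Y = H*X                                                 # frequency-domain product, eq. (1.11)
f = np.fft.rfftfreq(len(t), 1/fs)
print(f[np.abs(Y) > 1.0])                               # prints [50. 120.]: only the input frequencies remain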


Finally, equation (1.11) tells us that, in the same way we only needed to know the system’s impulse response to be able to predict the response to any arbitrary stimulus in the time domain, we just need to know H(ω) to predict the response to any arbitrary periodic input described in the frequency domain. As an illustration, Figure 1.3 depicts the measured transfer function S21(ω), in amplitude and phase, of a microwave filter.





Figure 1.3 Example of the frequency-domain transfer function of a linear RF circuit, H(ω): measured forward gain, S21(ω), in amplitude – (a) and phase – (b), of a microwave filter.



1.3 Nonlinearity: The Lack of Superposition


As all of us have been extensively taught and trained in working with linear systems, and with the additivity and homogeneity properties being so intuitive, we may easily fall into the trap of believing that these should be properties naturally inherent to all physical systems. But this is not the case. In fact, most macroscopic physical systems behave very differently from linear systems, i.e., they are not linear. Actually, we use the term nonlinear systems to identify them.


Since we have been making the effort to define all important concepts used so far, we should start by defining a nonlinear system. But that is not a straightforward task as there is no general definition for these systems. There is only the unsatisfying approach of defining something by what it is not: a nonlinear system is one that is not linear, i.e., a nonlinear system is one that does not obey the principle of superposition. This is an intriguing, but also revealing, situation, which tells us that if linear systems are the ones that obey a precise mathematical principle, nonlinear systems are all the other ones. Hence, from an engineering standpoint the relevant question to be answered is: Are nonlinear systems often seen, or used, in practice? To demonstrate their importance, let us try a couple of very common RF electronic examples. But, before these, the reader may want to try the two simpler examples discussed in Exercises 1.1–1.4.




Example 1.1 Active Devices and Amplifiers


In this example we will show that any active device must be nonlinear.


As a first step, we will show that all active devices depend on two different excitations. One is the input signal and the other is the dc power supply. This means, as illustrated in Figure 1.4, that amplifiers are transducers that convert the power supplied by a dc power source into output signal power, i.e., they convert dc into RF power.


Now, as the second step in our attempt to prove that any active device must be nonlinear, let us assume, instead, that it could be linear. Then, it would have to obey the additivity property, which means that the response to each of the inputs, the signal and the power supply, should be determined separately. That is, the response to the auxiliary supply and to the signal should be obtained as if the other stimulus did not exist. We would then end up with an amplifier that could amplify the signal power without requiring any auxiliary power, thus violating the energy conservation principle.


Although this argument seems quite convincing, it raises a puzzling question because, if it is impossible to produce amplifiers without requiring nonlinearity, we should be magicians, as we have all already seen and designed linear amplifiers. So, how can we overcome this paradox?





Figure 1.4 Illustration of the power flow in a transducer or amplifier.


According to the power flow shown in Figure 1.4, where Pin, Pout, Pdc and Pdiss are, respectively, the signal input and output powers, the supplied dc power and the dissipated power (herein assumed as all forms of energy that are not correlated with the information signal, such as heat, harmonic generation, intermodulation distortion, etc.), the amplifier gain, G, can be defined by


G ≡ Pout/Pin    (1.12)

And this G must be constant and independent of Pin for linearity to be preserved.


Imposing the energy conservation principle on this transducer results in



Pout + Pdiss = Pin + Pdc    (1.13)

from which the following constraint can be found for the gain:


G(Pin) = 1 + (Pdc − Pdiss)/Pin    (1.14)

Since Pdiss cannot decrease below zero (100% dc-to-RF conversion efficiency) and Pdc must be limited (as is the case for real power sources), G(Pin) cannot be kept constant but must decrease beyond a certain maximum Pin.
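As a numerical illustration (the figures are assumed, not taken from any datasheet): with Pdc = 1 W and Pdiss ≥ 0, (1.14) bounds the gain by G(Pin) ≤ 1 + 1 W/Pin, so a 20 dB gain (G = 100) can only be sustained while Pin ≤ 1/99 W ≈ 10 mW; beyond that input power, compression is unavoidable.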


In RF amplifiers, this gain decrease with input signal power is called gain compression. In practice, amplifiers not only exhibit a gain variation when their input amplitude changes, but also an input-dependent phase shift. This is particularly important in RF amplifiers intended to process amplitude modulated signals as this input modulation is capable of inducing nonlinear output amplitude and phase modulations. These are the well-known AM/AM and AM/PM nonlinear distortions, often plotted as shown in Figure 1.5(a) and (b), respectively.





Figure 1.5 Illustration of measured (a) amplitude – AM/AM – and (b) phase-shift – AM/PM – gain variations as a function of input signal amplitude. Please note how these plots are not any idealized lines, but a cloud of dots that reveal hysteretic trajectories.


This analysis shows that linearity can only be obeyed at sufficiently small signal levels, and that it is only a matter of excitation amplitude to make an apparently linear amplifier expose its hidden nonlinearity.


Actually, this study provided us a much deeper insight of linearity and linear systems. Linearity is what we obtain when looking only at the system’s input to output signal mapping (leaving aside the dc-to-RF energy conversion process) and when the signal is a very small perturbation of the dc quiescent point. So, linear systems are the conceptual mathematical model for the behaviors obtained from analytic operators (i.e., that are continuous and infinitely differentiable mappings), when these are excited with signals whose amplitudes are infinitesimally small as compared with the magnitude of the quiescent points. And it is under this small-signal operation regime that the linear approximation is valid. We will come back to this important concept later.




Example 1.2 A Sinusoidal Oscillator


A sinusoidal oscillator is another system that depends on nonlinearity to operate. Although in basic linear system analysis we learned how to predict the stable and unstable regimes of amplifiers, and so to predict oscillations, we were not told the complete story. To understand why, we can just use the above results on the analysis of the amplifier and recognize that, by definition, an oscillator is a system that provides an output even without an input. That is, contrary to an amplifier, which is a nonautonomous, or forced, system, an oscillator is an autonomous one. So, if it did not rely on any external source of power, it would violate the energy conservation principle. Like an amplifier, it is, instead, a transducer that converts energy from a dc power supply into signal power at some frequency ω. Hence, like the amplifier, it must rely on some form of nonlinearity. But, unlike the amplifier, which we have shown can behave in an approximately linear way when seen from the input signal to the output signal, we will now show that not even this is possible in an oscillator.


To see why, consider the following linear differential equation of constant (i.e., time-invariant) coefficients – one of the most common models of linear systems:


LC d²i(t)/dt² + (RS + RL + RA) C di(t)/dt + i(t) = C dvs(t)/dt    (1.15)

which describes the loop current, i(t), sinusoidal oscillations of a series RLC circuit when the excitation vanishes, vs(t) = 0. The proof that this equation is indeed the model of a linear time-invariant system is left as an exercise for the reader (see Exercise 1.6).


This RLC circuit is assumed to be driven by an active device whose model for the power delivered to the network is the negative resistance RA, and to be loaded by the load resistance RL and the inherent LC tank losses RS. It can be shown that the solution of this equation, when vs(t) = 0, is of the form



i(t) = A e^{−λt} cos(ωt)    (1.16)

where λ = (RS + RL + RA)/(2L) and ω = √(1/(LC) − λ²).


The first curious result of this linear oscillator model is that it does not provide any prediction for the oscillation amplitude A, as if A could take any arbitrary value. The second is that, to keep a steady-state oscillation, i.e., one whose amplitude does not decay or increase exponentially with time, λ must be exactly (i.e., with infinite precision) zero, or RA = −(RS + RL), something our engineering common sense finds hard to believe. Both of these unreasonable conditions are a consequence of the absence of any energy constraint in (1.15), which, itself, is a consequence of the performed linearization. In practice, what happens is that the active device is nonlinear: its negative resistance is not constant but an increasing function of amplitude, RA(A) = −f(A), so that a negative feedback process keeps the oscillation amplitude constant at A = f⁻¹(RS + RL), in which f⁻¹(·) represents the inverse function of f(·).
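A minimal numerical sketch of this stabilizing mechanism follows. The element values and the van der Pol-type law RA(i) = −R0[1 − (i/I0)²] are assumptions chosen only for illustration, not a model of any particular oscillator; starting from a tiny perturbation, the loop current first grows exponentially and then settles to a steady amplitude fixed by the nonlinearity.

import numpy as np
from scipy.integrate import solve_ivp

L, C = 10e-9, 2.5e-12             # assumed tank values (resonance near 1 GHz)
Rloss = 10.0                      # RS + RL (ohms, assumed)
R0, I0 = 20.0, 10e-3              # assumed negative-resistance law RA(i) = -R0*(1 - (i/I0)**2)

def loop(t, y):                   # state form of (1.15) with vs(t) = 0
    i, didt = y
    RA = -R0*(1.0 - (i/I0)**2)    # amplitude-dependent negative resistance
    return [didt, -((Rloss + RA)/L)*didt - i/(L*C)]

sol = solve_ivp(loop, [0.0, 200e-9], [1e-6, 0.0], max_step=2e-11)
i_late = sol.y[0][sol.t > 150e-9]                 # keep only the settled portion
print("steady-state amplitude (A):", np.abs(i_late).max())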


Although nonlinearity is often seen as a source of perturbation, referred to with many terms, such as harmonic distortion, nonlinear cross-talk, desensitization, or intermodulation distortion, it plays a key role in wireless communications. As a matter of fact, these two examples, along with the amplitude modulator of Exercises 1.3–1.5, show that nonlinearity is essential for amplifiers, oscillators, modulators, and demodulators. And since wireless telecommunication systems depend on these devices to generate RF carriers, translate information base-band signals back and forth to radio frequencies (reducing the size of the antennas), and provide amplification to compensate for the free-space path loss, we easily conclude that without nonlinearity wireless communications would be impossible. As an illustration, Figure 1.6 shows the block diagram of a wireless transmitter where the blocks from which nonlinearity should be expected are put in evidence.





Figure 1.6 Block diagram of a wireless transmitter where the blocks from which nonlinearity is expected are highlighted: linear blocks are represented within dashed line boxes whereas nonlinear ones are drawn within solid line boxes.



1.4 Properties of Nonlinear Systems


This section illustrates the multiplicity of behaviors that can be found in nonlinear dynamic systems. Although a full mathematical analysis of those responses does not constitute a key objective of this chapter, we will nevertheless base our tests on a simple circuit so that each of the observed behaviors can be approximately explained by relating it to the circuit topology and components.


The analyses will be divided into forced and autonomous regimes, like the ones found in amplifiers or frequency multipliers and oscillators, respectively. However, to obtain a more applied view of these responses we will further group forced regimes into responses to CW and modulated excitations.



1.4.1 An Example of a Nonlinear Dynamic Circuit


To start exploring some of the basic properties of nonlinear systems, we will use the simple (conceptual) amplifier shown in Figure 1.7.





Figure 1.7 (a) Conceptual amplifier circuit used to illustrate some properties of forced nonlinear systems. (b) Equivalent circuit when the dc block capacitors and the dc feed inductances are substituted by their corresponding dc sources. (c) Simplified unilateral circuit after Cgd was reflected to the input via its Miller equivalent.


In order to preserve the desired simplicity, enabling us to qualitatively relate the obtained responses to the circuit's model, we will assume that the input and output dc block capacitors, CB, and the RF bias chokes, LCh, are short circuits and open circuits to the RF signals, respectively. Therefore, the dc blocking capacitors can be simply replaced by two ideal dc voltage sources, the gate RF choke can be neglected, and the drain choke must be replaced by a dc current source that equals the average iDS(t) current, IDS. This is illustrated in Figure 1.7(b). Furthermore, we will also assume that the FET's feedback capacitor, Cgd, can be replaced by its input and output reflected Miller capacitances, whose values are Cgd_in = Cgd(1 − Av) and Cgd_out = Cgd(Av − 1)/Av, respectively. Assuming that the voltage gain, Av, is negative and much larger than one in magnitude (rigorously speaking, much smaller than minus one), the FET's total input capacitance, Ci, will be approximately given by Ci = Cgs + Cgd|Av|, while the output capacitance, Co, will equal Cgd and is thus negligible. Under these conditions, the schematic of Figure 1.7(b) becomes the one shown in Figure 1.7(c), whose analysis, using Kirchhoff's laws, leads to


vS(t) = RS iG(t) − VGG + L1 diG(t)/dt + vGS(t)    (1.17)

and



vDS(t) = VDD − RL[iDS(t) − IDS]    (1.18)

in which iG(t) = Ci dvGS(t)/dt is the gate current flowing through Ci and iDS(t) is the FET's drain-to-source current. This drain current is assumed to be some suitable static nonlinear function of the gate-to-source voltage, vGS(t), and drain-to-source voltage, vDS(t).


If vDS(t) is kept sufficiently high, i.e., vDS(t) >> VK, the FET's knee voltage, then iDS(t) can be considered dependent only on the input voltage, iDS(vGS). Using these results in (1.17), the differentiation chain rule leads to the second-order differential equation


vS(t) = RS Ci dvGS(t)/dt − VGG + L1 Ci d²vGS(t)/dt² + vGS(t)    (1.19)

whose solution, vGS(t), allows the determination of the amplifier output voltage as



vo(t) = vDS(t) − VDD = −RL{iDS[vGS(t)] − IDS}    (1.20)
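Equations (1.19) and (1.20) can be integrated numerically to reproduce the kind of waveforms discussed next. The Python sketch below is only illustrative: the element values and the saturating iDS(vGS) law are assumptions, not parameters extracted from any real FET.

import numpy as np
from scipy.integrate import solve_ivp

RS, L1, Ci = 50.0, 1e-9, 2e-12        # assumed source resistance, inductance and input capacitance
VGG, RL = -1.0, 50.0                  # assumed gate bias and load resistance
A, w = 0.8, 2*np.pi*2e9               # CW drive amplitude (V) and frequency (rad/s)

def iDS(vGS):                         # assumed saturating FET-like iDS(vGS) characteristic
    VT, Imax = -2.0, 60e-3
    return 0.5*Imax*(1.0 + np.tanh(2.0*(vGS - VT) - 2.0))

def gate(t, y):                       # state form of (1.19): vGS and its time derivative
    vGS, dvGS = y
    vS = A*np.cos(w*t)
    return [dvGS, (vS + VGG - vGS - RS*Ci*dvGS)/(L1*Ci)]

sol = solve_ivp(gate, [0.0, 5e-9], [VGG, 0.0], max_step=2e-12)
IDS = iDS(VGG)                        # quiescent current used here to approximate the average of iDS(t)
vo = -RL*(iDS(sol.y[0]) - IDS)        # output voltage from (1.20)
print("output swing (V):", vo.min(), vo.max())

Sweeping the drive amplitude A in such a simulation produces the progressively clipped waveforms and growing harmonic content illustrated in Figures 1.8 and 1.9.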


1.4.2 Response to CW Excitations


Because of the central role they play in RF circuits, we will first identify the responses to sinusoidal, or CW, excitations (plus the dc bias). Hence, our amplifier is described by a circuit that includes two nonlinearities: iDS(vGS,vDS), which is static, and Ci, a nonlinear capacitor that, depending on the voltage gain, evidences its nonlinearity when the amplifier suffers from iDS(vGS,vDS)-induced gain compression.


Figure 1.8 shows the drain-source voltage evolution in time, vDS(t), for three different CW excitation amplitudes, while Figure 1.9 depicts the respective spectra. Under the small-signal regime, i.e., small excitation amplitudes, in which the FET is kept in the saturation region and vgs (the signal component of the composite vGS voltage, defined by vgs ≡ vGS − VGS with, in this case, VGS = VGG) is so small that iDS(vGS,vDS) ≈ IDS + gm vgs + gm2 vgs² + gm3 vgs³ ≈ IDS + gm vgs, the amplifier presents an almost linear response, without any other harmonics than the dc and the fundamental component already present at the input. As we increase the excitation amplitude, the vGS(t) and vDS(t) voltage swings become sufficiently large to excite the FET's iDS(vGS,vDS) cutoff (vGS(t) < VT, the FET's threshold voltage) and knee voltage (vDS(t) ≈ VK) nonlinearities, and the amplifier starts to evidence its nonlinear behavior, producing other frequency-domain harmonic components, or distorted time-domain waveforms.





Figure 1.8 vDS(t) voltage evolution in time for three different CW excitation amplitudes. Note the distorted waveforms arising when the excitation input is increased.





Figure 1.9 Vds(ω) spectrum for three different CW excitation amplitudes. Note the increase in the harmonic content with the excitation amplitude.


In a cubic nonlinearity such as the one used above for the dependence of iDS on vGS, this means that a sinusoidal excitation, vs(t) = A cos(ωt), produces a gate-source voltage of vgs(t) = |Vgs(ω)| cos(ωt + ϕi), in which


Vgs(ω) = A/(1 − ω²L1Ci + jωRSCi) = Hi(ω) A = |Hi(ω)| e^{jϕi} A    (1.21)

produces the following ids(t) response:


ids(t) ≈ gm |Hi(ω)| A cos(ωt + ϕi) + (1/2) gm2 |Hi(ω)|² A² + (1/2) gm2 |Hi(ω)|² A² cos(2ωt + 2ϕi) + (3/4) gm3 |Hi(ω)|³ A³ cos(ωt + ϕi) + (1/4) gm3 |Hi(ω)|³ A³ cos(3ωt + 3ϕi)    (1.22)

which evidences the generation of a linear component proportional to the input stimulus, a quadratic dc component, second and third harmonics, and a cubic term at the fundamental. Actually, it is this fundamental cubic term that is responsible for modeling the amplifier's gain compression, since the equivalent amplifier transconductance gain is


Gm(A) ≡ Ids(ω)/A ≈ gm Hi(ω) + (3/4) gm3 Hi(ω) |Hi(ω)|² A²    (1.23)

i.e., it depends on the input amplitude.
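The harmonic amplitudes in (1.22) are easy to verify numerically. The short sketch below uses arbitrary, assumed values for gm, gm2, gm3 and for the drive level B = |Hi(ω)|A, projects the cubically distorted cosine onto its harmonics, and compares the result with the closed-form coefficients; the negative gm3 makes the fundamental term compressive, as in (1.23).

import numpy as np

gm, gm2, gm3 = 50e-3, 20e-3, -60e-3      # assumed Taylor coefficients (S, S/V, S/V^2)
B = 0.3                                  # assumed drive level B = |Hi(w)|*A, in volts
th = np.linspace(0.0, 2*np.pi, 4096, endpoint=False)
v = B*np.cos(th)
ids = gm*v + gm2*v**2 + gm3*v**3         # cubic nonlinearity driven by a cosine

def harm(k):                             # cosine Fourier coefficient at the k-th harmonic
    return (1 if k == 0 else 2)/len(th)*np.sum(ids*np.cos(k*th))

print(harm(0), 0.5*gm2*B**2)             # dc term
print(harm(1), gm*B + 0.75*gm3*B**3)     # fundamental, including the compressive cubic contribution
print(harm(2), 0.5*gm2*B**2)             # second harmonic
print(harm(3), 0.25*gm3*B**3)            # third harmonic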


In communication systems, in which an RF sinusoidal carrier is modulated with some amplitude and phase information (the so-called complex envelope [1]), such a gain variation with input amplitude has always deserved particular attention, since it describes how the input amplitude and phase information are changed by the amplifier. This defines the so-called AM/AM and AM/PM distortions. That is what is represented in Figures 1.10 and 1.11, in which the input–output fundamental carrier amplitude and phase (with respect to the input phase reference) are shown versus the input amplitude. When the output voltage waveform becomes progressively limited by the FET's nonlinearities, its corresponding gain at the fundamental component gets compressed and its phase-lag reduced. Indeed, when the voltage gain is reduced, so is the Cgd Miller reflected input capacitance, and thus Ci. Hence, the phase of Hi(ω) increases, and consequently, the vGS(t) fundamental component shows an apparent phase-lead (actually a reduced phase-lag), revealed as the AM/PM of Figure 1.11.





Figure 1.10 Response to CW excitations: (a) AM/AM characteristic and (b) Gain characteristic.





Figure 1.11 Response to CW excitations: AM/PM characteristic.


What happens in real amplifiers (see, for example, [2]) is that, because of the circuit nonlinear reactive components, or, as was here the case, because of the interactions between linear reactive components (Cgd) and static nonlinearities (iDS(vGS,vDS), which manifests itself as a nonlinear voltage gain, Av), the equivalent gain suffers a change in both magnitude and phase manifested under CW excitation as AM/AM and AM/PM.



1.4.3 Response to Multitone or Modulated Signals


The extension of the CW regime to a modulated one is trivial, as long as we can assume that the circuit responds to a modulated signal like



vS(t, τ) = a(τ) cos[ωct + ϕ(τ)]    (1.24)

without presenting any memory to the envelope. This means that the circuit treats our modulated carrier as a succession of independent CW signals of amplitude a(τ) and phase ϕ(τ). That is, we are implicitly assuming that the envelope varies in a much slower and uncorrelated way than the carrier (the narrow bandwidth approximation), as if the RF carrier would evolve in a fast time t while the base-band envelope would evolve in a much slower time τ. In that case, for example, a two-tone signal of frequencies ω1 = ωc − ωm and ω2 = ωc + ωm, i.e., whose frequency separation is 2ωm and which is centered at ωc, and in which ωc >> ωm, can be seen as a double sideband amplitude modulated signal of the form


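vS(t, τ) = A cos[(ωc − ωm)t] + A cos[(ωc + ωm)t] = 2A cos(ωmτ) cos(ωct)

(written here assuming both tones share the same amplitude A), i.e., a double sideband signal with envelope a(τ) = 2A cos(ωmτ) and constant phase ϕ(τ) = 0.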