This chapter has a two-fold objective. First, it introduces the nomenclature that will be used throughout the book. Second, it presents the basic mathematical theory necessary to describe nonlinear systems, which will help the reader to understand their rich set of behaviors. This will clarify several important distinctions between linear and nonlinear circuits and their mathematical representations.

We shall start with a brief review of linearity and linear systems, their main properties and underlying assumptions. A reader familiarized with the linear system realm can understand the limitations of the theoretical abstraction framed in the linearity mathematical concept, realizing its validity borders and so be prepared to cross them, i.e., to enter the natural world of nonlinearity. We will then introduce nonlinear systems and the responses that we should expect from them. After this, we will study one static, or memoryless, nonlinearity and a dynamic one, i.e., one that exhibits memory. This will then establish the foundations of nonlinear static and dynamic models and their basic extraction procedures.

The chapter is presented as follows: Section 1.1 is devoted to nomenclature and Section 1.2 reviews linear system theory. Sections 1.3 and 1.4 illustrate the types of behaviors found in general nonlinear systems and, in particular, in nonlinear RF and microwave circuits. Then, Sections 1.5 and 1.6 present the theory of nonlinear static and dynamic systems that will be useful to understand the nonlinear circuit simulation algorithms treated in Chapter 2 and the device modeling techniques of Chapters 3–6. Mathematics of nonlinear systems, and in particular dynamic ones, is not easy or trivial. So, we urge you to not feel discouraged if you do not understand it after your first read. What you will find in the next chapters will certainly help provide a physical meaning and practical usefulness to most of these sometimes abstract mathematical formulations. Finally, Section 1.7 closes this chapter with a brief conclusion.

### 1.1 Basic Definitions

We will frequently use the notion of model and system, so it is convenient to first identify these concepts.

#### 1.1.1 Model

A model is a **mathematical description**, or representation, of a set of particular features of a physical entity that combines the observable (i.e., measurable) magnitudes and our previous knowledge about that entity. Models enable the simulation of a physical entity and so allow a better understanding of its observed behavior and provide predictions of behaviors not yet observed. As models are simplifications of the physically observable, they are, by definition, an approximation and restricted to represent a subset of all possible behaviors of the physical device.

#### 1.1.2 System

As depicted in Figure 1.1, a system is a model of a machine or mechanism that transforms an input (excitation, or stimulus, usually assumed as a function of time), *x*(*t*), into an output (or response, also varying in time), *y*(*t*). Mathematically, it is defined as the following operator: *y*(*t*) = *S*[*x*(*t*)], in which *x*(*t*) and *y*(*t*) are, themselves, mathematical representations of the input and output measurable signals, respectively. Please note that, contrary to ordinary mathematical functions, which operate on numbers (i.e., that for a given input number, *x*, they respond with an output number, *y* = *f*(*x*)), mathematical operators map functions, such as *x*(*t*), onto other functions, *y*(*t*). So, they are also known as *mathematical function maps*. And, similar to what is required for functions, a particular input must be mapped onto a particular, unique, output.

Figure 1.1 Illustration of the system concept.

When the operator is such that its response at a particular instant of time, *y*(*t*_{0}), is only dependent on that particular input instant, *x*(*t*_{0}), i.e., the system transforms each input value onto the corresponding output value, the operator is reduced to a function and the system is said to be **static or memoryless**. When, on the other hand, the system output cannot be uniquely determined from the instantaneous input only, but depends on *x*(*t*_{0}) and its past and future values, *x*(*t* ± *τ*), i.e., the system is now an operator of the whole *x*(*t*) onto *y*(*t*), we say that the system is **dynamic** or that it exhibits *memory*. (In practice, real systems cannot depend on future values because they must be causal.) For example, resistive networks are static systems, whereas networks that include energy storage elements (memory), such as capacitors, inductors or transmission lines, are dynamic.

Defined this way, this notion of a system can be used as a representation, or model, of any physical device, which can either be an individual component, a circuit or a set of circuit blocks. An interesting feature of this definition is that a system is nestable, i.e., it is such that a block (circuit) made of interconnected individual systems (circuit elements or components) can still be treated as a system. So, we will use this concept of system whenever we want to refer to the properties that we normally observe in components or circuits.
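The static/dynamic distinction can be sketched numerically. In the following hypothetical Python illustration (the square-law map and the RC time constant are our own choices, not from the text), a memoryless system is a pointwise function of the input, while a system with memory depends on the input's past:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
dt = t[1] - t[0]
x = np.sin(2 * np.pi * 5 * t)  # an arbitrary input signal x(t)

def static_system(x):
    # Memoryless: each output sample depends only on the corresponding
    # input sample (an illustrative, mildly nonlinear resistive map).
    return x + 0.2 * x**2

def dynamic_system(x, dt, tau=0.05):
    # With memory: a first-order (RC-like) system; each output sample
    # depends on the whole past of the input.
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = y[n - 1] + (dt / tau) * (x[n - 1] - y[n - 1])
    return y

y_static = static_system(x)
y_dynamic = dynamic_system(x, dt)

# The static output at any instant is fully determined by the input at
# that same instant:
print(np.isclose(y_static[400], static_system(x[400])))  # True
```

No such pointwise check holds for `dynamic_system`: its output at an instant cannot be recovered from the input at that instant alone.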

#### 1.1.3 Time Invariance

Although the system response, *y*(*t*), varies in time, that does not necessarily mean that the system varies in time. The change in time of the response can be only a direct consequence of the input variation with time. This time-invariance of the operator is expressed by stating that the system reacts in exactly the same way regardless of the time at which it is subjected to the same input. That is, if the response to *x*(*t*) is *y*(*t*) = *S*[*x*(*t*)], and another test is made after a certain amount of time, *τ*, then the response will be exactly the same as before, except that now it will be naturally delayed by that same amount of time: *y*(*t − τ*) = *S*[*x*(*t − τ*)]. This defines a **time-invariant** system. If, on the other hand, *y*(*t − τ*) ≠ *S*[*x*(*t − τ*)], then the system is said to be **time-variant**.

The vast majority of physical systems, and thus of electronic circuits, are time-invariant. Therefore, we will assume that all systems referred to in this and succeeding chapters are time-invariant unless otherwise explicitly stated.
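The defining property can be checked numerically. The sketch below (our own illustration, not from the text) uses a causal 3-sample moving average as an example of a time-invariant discrete system and verifies that delaying the input simply delays the output:

```python
import numpy as np

def S(x):
    # Example time-invariant system: a causal 3-sample moving average.
    return np.convolve(x, np.ones(3) / 3.0)[:len(x)]

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
k = 5  # delay, in samples, playing the role of tau

x_delayed = np.concatenate([np.zeros(k), x[:-k]])     # x(t - tau)
y_delayed = np.concatenate([np.zeros(k), S(x)[:-k]])  # y(t - tau)

# Time invariance: S[x(t - tau)] equals y(t - tau)
print(np.allclose(S(x_delayed), y_delayed))  # True
```

A time-variant system (for example, one whose averaging window changed with absolute time) would fail this equality.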

After finalizing the study of this chapter, the reader may try Exercise 1.5, which is a good example of how the time-variance property can be used to treat a modulator, which is inherently nonlinear and time-invariant, as a much simpler linear time-variant system.

### 1.2 Linearity and the Separation of Effects

Now we will define a linear system as one that obeys superposition and recall how we use this property to determine the response of a linear system to a general excitation.

#### 1.2.1 Superposition

A system is said to be linear if it obeys the principle of superposition, i.e., if it shares the properties of additivity and homogeneity.

The additivity property means that if *y*_{1}(*t*) is the system response to *x*_{1}(*t*), *y*_{1}(*t*) = *S*[*x*_{1}(*t*)], *y*_{2}(*t*) is the system’s response to *x*_{2}(*t*), *y*_{2}(*t*) = *S*[*x*_{2}(*t*)], and *y*_{T}(*t*) is the response to *x*_{1}(*t*) + *x*_{2}(*t*), then

*y*_{T}(*t*) = *S*[*x*_{1}(*t*) + *x*_{2}(*t*)] = *S*[*x*_{1}(*t*)] + *S*[*x*_{2}(*t*)] = *y*_{1}(*t*) + *y*_{2}(*t*)

The additivity property is the mathematical statement that affirms that a linear system reacts to an additive composition of stimuli as an additive composition of responses, as if the system could distinguish each of the stimuli and treat them separately. In practical terms, this would mean that, if, in the lab, the result of an experiment with a cause *x*_{1}(*t*) would produce an effect *y*_{1}(*t*), and another, independent, experiment, on another cause *x*_{2}(*t*), would produce *y*_{2}(*t*), then, a third experiment, now made on a third stimulus *x*_{1}(*t*) + *x*_{2}(*t*), would produce a response that is the numerical summation of the two previously obtained effects *y*_{1}(*t*) + *y*_{2}(*t*).
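This lab-bench view of additivity translates directly into a numerical experiment. In the sketch below (our own illustration; the first-order low-pass filter stands in for the linear system *S*), the response to a sum of stimuli equals the sum of the individual responses:

```python
import numpy as np

def S(x, dt=1e-3, tau=0.01):
    # Example linear system: a discrete first-order low-pass filter
    # (zero initial state), standing in for the operator S[.].
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = y[n - 1] + (dt / tau) * (x[n - 1] - y[n - 1])
    return y

t = np.arange(0.0, 0.1, 1e-3)
x1 = np.sin(2 * np.pi * 50 * t)   # first "experiment"
x2 = np.cos(2 * np.pi * 120 * t)  # second, independent "experiment"

# Additivity: the response to x1 + x2 is the sum of the two responses.
print(np.allclose(S(x1 + x2), S(x1) + S(x2)))  # True
```

Replacing the filter update with any nonlinear map (e.g., adding a term in `y[n-1]**2`) would break the equality.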

On the other hand, the homogeneity property means that if *α* is a constant, then the response to *αx*(*t*) will be *αy*(*t*), i.e.,

*S*[*αx*(*t*)] = *αS*[*x*(*t*)] = *αy*(*t*)

The homogeneity property is the mathematical description of proportionality that says that an *α* times larger cause produces an *α* times larger effect. However, it does not necessarily state that the effects are proportional to their corresponding causes. For example, although the current and the voltage in a constant (linear) capacitance obey the homogeneity principle, they are not proportional to each other. In fact, since the current in a capacitor is given by (1.3), the current to a twice as large *v*_{c}(*t*) will be twice as large as *i*_{c}(*t*). However, that does not mean that *i*_{c}(*t*) is proportional to *v*_{c}(*t*), as can be readily noticed when *v*_{c}(*t*) is a ramp in time and *i*_{c}(*t*) is a constant.
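This capacitor example is easy to verify numerically. In the sketch below (illustrative values: a 1 nF capacitance and a 1 V/µs ramp, both our own assumptions), doubling the voltage doubles the current, yet the ramp voltage produces a constant current, so the two waveforms are clearly not proportional:

```python
import numpy as np

C = 1e-9                      # 1 nF linear capacitor (illustrative value)
t = np.linspace(0.0, 1e-6, 1001)

def i_c(v_c):
    # i_C(t) = C * dv_C(t)/dt, evaluated numerically
    return C * np.gradient(v_c, t)

v_ramp = 1e6 * t              # a voltage ramp of 1 V/us
i1 = i_c(v_ramp)
i2 = i_c(2.0 * v_ramp)

print(np.allclose(i2, 2.0 * i1))   # True: homogeneity holds
print(np.allclose(i1, C * 1e6))    # True: the current is constant while the
                                   # voltage ramps, so i_C is not proportional to v_C
```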

In summary, **linear systems** obey the principle of *superposition*.

#### 1.2.2 Response of a Linear System to a General Excitation

Superposition has very useful consequences that we now briefly review. They all revolve around the idea of the separation of effects, whereby we can expand any previously untested stimulus into a summation of previously tested excitations, making general predictions about the system's responses.

##### 1.2.2.1 Linear Response in the Time Domain

In the time domain, this means that, if we represent any input, *x*(*t*), as composed of the succession of its time samples, taken at regular intervals, *T*_{s}, of a constant sampling frequency *f*_{s} = 1/*T*_{s}, so that they asymptotically produce the same effect of *x*(*t*),

*x*(*t*) ≈ ∑_{n=0}^{N} *x*(*nT*_{s})*δ*(*t* − *nT*_{s})*T*_{s} (1.5)

in which *δ*(*t* − *nT*_{s}) is the Dirac delta, or impulse, function centered at *nT*_{s}, where the integer *n* indexes the samples (see Figure 1.2(a)), and we know the response of the system to one of these impulse functions of unity amplitude, *h*(*t*) = *S*[*δ*(*t*)] (see Figure 1.2(b)), then we can readily predict the response to any arbitrary input *x*(*t*) as

*y*(*t*) ≈ ∑_{n=0}^{N} *x*(*nT*_{s})*h*(*t* − *nT*_{s})*T*_{s} (1.6)

by simply making use of the additivity and homogeneity properties (as shown in Figure 1.2(c)). Expression (1.6) is exact in the limit when the sampling interval, *T*_{s}, tends to zero and *N* tends to infinity, becoming the well-known convolution integral:

*y*(*t*) = ∫_{−∞}^{+∞} *h*(*τ*)*x*(*t* − *τ*)d*τ* (1.7)

Figure 1.2 Response, *y*(*t*), of a linear dynamic and time-invariant system to an arbitrary input, *x*(*t*), when this stimulus is expanded in a summation of Dirac delta functions. (a) Input expansion with the base of delayed Dirac delta functions *x*(*n*) = *x*(*nT*_{s})*δ*(*t* − *nT*_{s}). (b) Impulse response of the system, *h*(*t*) = *S*[*δ*(*t*)]. (c) Response of the system to *x*(*t*), *y*(*t*) = *S*[*x*(*t*)].
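The discretized convolution sum can be reproduced in a few lines. In this sketch (our own illustration; the exponential impulse response is an assumption made only for the example), the output is built exactly as the sum of delayed, scaled impulse responses:

```python
import numpy as np

Ts = 1e-3                                   # sampling interval
n = np.arange(300)
x = np.sin(2 * np.pi * 5 * n * Ts)          # samples x(n Ts) of an arbitrary input
h = np.exp(-n * Ts / 0.02) / 0.02           # assumed impulse response h(n Ts)

# y(n Ts) ~= sum_k x(k Ts) h((n - k) Ts) Ts  -- the discretized convolution
y = Ts * np.convolve(x, h)[:len(n)]

# Same result written as the explicit double sum over delayed impulses:
y_direct = np.array(
    [Ts * sum(x[k] * h[m - k] for k in range(m + 1)) for m in range(len(n))]
)
print(np.allclose(y, y_direct))  # True
```

As *T*_{s} shrinks, this sum converges to the convolution integral.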

##### 1.2.2.2 Linear Response in the Frequency Domain

So, in the time domain, we only needed to know the system response to one input basis function – the impulse response, *h*(*t*) = *S*[*δ*(*t*)] – to be able to predict the response to any other arbitrary input. Similarly, in the frequency domain we only need to know the response to one input basis function, the cosine, although tested at all frequencies, to predict the response to any arbitrary periodic input.

Actually, since the cosine can be given as the additive combination of two complex exponentials,

cos (*ωt*) = (*e*^{jωt} + *e*^{−jωt})/2 (1.8)

from a mathematical viewpoint, we only need to know the response to that basic complex exponential. This response can be obtained from (1.7) as

*S*[*e*^{jωt}] = ∫_{−∞}^{+∞} *h*(*τ*)*e*^{jω(t−τ)}d*τ* = *e*^{jωt}∫_{−∞}^{+∞} *h*(*τ*)*e*^{−jωτ}d*τ* = *H*(*ω*)*e*^{jωt} (1.9)

in which *H*(*ω*) is the Fourier transform of *h*(*τ*). This is an interesting result that tells us that the response to an arbitrary *x*(*t*) can be easily computed by summing up the Fourier components of that input, scaled by the system's response to each particular frequency. Indeed, if *R*(*ω*) is the frequency-domain Fourier representation of a time-domain signal *r*(*t*), so that

*R*(*ω*) = ∫_{−∞}^{+∞} *r*(*t*)*e*^{−jωt}d*t* (1.10)

and

*r*(*t*) = (1/2*π*)∫_{−∞}^{+∞} *R*(*ω*)*e*^{jωt}d*ω*

then, the substitution of (1.10) into (1.7) leads to

*Y*(*ω*) = *H*(*ω*)*X*(*ω*) (1.11)

where *Y*(*ω*) can be related to *y*(*t*) – as *X*(*ω*) is related to *x*(*t*) – by the Fourier transform of (1.10). This expression tells us the following two important things.

First, the time-domain convolution of (1.7) between the input, *x*(*t*), and the impulse response, *h*(*τ*), becomes the product of the frequency-domain representations of these two entities, *X*(*ω*) and *H*(*ω*), respectively.

Second, the response of a linear time-invariant system to a continuous-wave (CW) signal (an unmodulated carrier of frequency *ω*, specifically cos (*ωt*)) is another CW signal of the same frequency with, possibly, different amplitude and phase. Consequently, the response to a signal of complex spectrum will only have frequency-domain components at the frequencies already present at the input. A time-invariant linear system is incapable of generating new frequency components or of performing any qualitative transformation of the input spectrum.

Finally, equation (1.11) tells us that, in the same way we only needed to know the system’s impulse response to be able to predict the response to any arbitrary stimulus in the time domain, we just need to know *H*(*ω*) to predict the response to any arbitrary periodic input described in the frequency domain. As an illustration, Figure 1.3 depicts the measured transfer function *S*_{21}(*ω*), in amplitude and phase, of a microwave filter.
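Both equation (1.11) and the no-new-frequencies property lend themselves to a short FFT experiment. In the sketch below (our own illustration; the exponential impulse response is an assumption), the output spectrum of an LTI system is nonzero only at the single bin excited at the input:

```python
import numpy as np

fs, N = 1000, 1000
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 50 * t)      # CW input placed exactly on FFT bin 50

h = np.exp(-np.arange(N) / 10.0)    # assumed discrete impulse response
h /= h.sum()                        # normalize the dc gain to 1

X = np.fft.rfft(x)
H = np.fft.rfft(h)                  # frequency response H(w)
Y = H * X                           # frequency-domain product, Y = H X
y = np.fft.irfft(Y)                 # time-domain output

# The output spectrum lives only where the input spectrum does (bin 50):
print(np.flatnonzero(np.abs(X) > 1e-6 * np.abs(X).max()))  # [50]
print(np.flatnonzero(np.abs(Y) > 1e-6 * np.abs(Y).max()))  # [50]
```

Only the amplitude and phase of the bin change (by *H* at that frequency); no harmonics or mixing products appear.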

### 1.3 Nonlinearity: The Lack of Superposition

As all of us have been extensively taught and trained in working with linear systems, and with the additivity and homogeneity properties being so intuitive, we may easily fall into the trap of believing that these should be properties naturally inherent to all physical systems. But this is not the case. In fact, most macroscopic physical systems behave very differently from linear systems, i.e., they are not linear. Accordingly, we use the term nonlinear systems to identify them.

Since we have been making the effort to define all important concepts used so far, we should start by defining a nonlinear system. But that is not a straightforward task as there is no general definition for these systems. There is only the unsatisfying option of defining something by what it is not: a nonlinear system is one that is not linear, i.e., a nonlinear system is one that does not obey the principle of superposition. This is an intriguing, but also revealing, situation, which tells us that if linear systems are the ones that obey a precise mathematical principle, nonlinear systems are all the other ones. Hence, from an engineering standpoint the relevant question to be answered is: Are nonlinear systems often seen, or used, in practice? To demonstrate their importance, let us try a couple of very common RF electronic examples. But, before these, the reader may want to try the two simpler examples discussed in Exercises 1.1–1.4.

In this example we will show that any active device must be nonlinear.

As a first step, we will show that all active devices depend on two different excitations. One is the input signal and the other is the dc power supply. This means, as illustrated in Figure 1.4, that amplifiers are transducers that convert the power supplied by a dc power source into output signal power, i.e., they convert dc into RF power.

Now, as the second step in our attempt to prove that any active device must be nonlinear, let us assume, instead, that it could be linear. Then, it would have to obey the additivity property, which means that the response to each of the inputs, the signal and the power supply, could be determined separately. That is, the response to the auxiliary supply and to the signal should be obtained as if the other stimulus did not exist. And we would end up with an amplifier that could amplify the signal power without requiring any auxiliary power, thus violating the energy conservation principle.

Although this argument seems quite convincing, it raises a puzzling question, because, if it is impossible to produce amplifiers without requiring nonlinearity, we should be magicians as we all have already seen and designed linear amplifiers. So, how can we overcome this paradox?

Figure 1.4 Illustration of the power flow in a transducer or amplifier.

According to the power flow shown in Figure 1.4, where *P*_{in}, *P*_{out}, *P*_{dc} and *P*_{diss} are, respectively, the signal input and output powers, the supplied dc power and the dissipated power (herein assumed as all forms of energy that are not correlated with the information signal, such as heat, harmonic generation, intermodulation distortion, etc.), the amplifier gain, *G*, can be defined by

*G* = *P*_{out}/*P*_{in} (1.12)

And this *G* must be constant and independent of *P*_{in} for preserving linearity.

Imposing the energy conservation principle to this transducer results in

*P*_{out} + *P*_{diss} = *P*_{in} + *P*_{dc} (1.13)

from which the following constraint can be found for the gain:

*G*(*P*_{in}) = *P*_{out}/*P*_{in} = 1 + (*P*_{dc} − *P*_{diss})/*P*_{in} (1.14)

Since *P*_{diss} cannot decrease below zero (100% dc-to-RF conversion efficiency) and *P*_{dc} must be limited (as is proper of real power sources), *G*(*P*_{in}) cannot be kept constant but must decrease beyond a certain maximum *P*_{in}.

In RF amplifiers, this gain decrease with input signal power is called gain compression. In practice, amplifiers not only exhibit a gain variation when their input amplitude changes, but also an input-dependent phase shift. This is particularly important in RF amplifiers intended to process amplitude modulated signals as this input modulation is capable of inducing nonlinear output amplitude and phase modulations. These are the well-known AM/AM and AM/PM nonlinear distortions, often plotted as shown in Figure 1.5(a) and (b), respectively.

This analysis shows that linearity can only be obeyed at sufficiently small signal levels, and that it is only a matter of excitation amplitude to make an apparently linear amplifier expose its hidden nonlinearity.
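The inevitability of compression can be made concrete with a small numerical sketch (hypothetical numbers, all ours: a 1 W supply, a 0.1 W dissipation floor and 20 dB of small-signal gain). The output power is simply clipped at the energy-conservation ceiling *P*_{in} + *P*_{dc} − *P*_{diss}:

```python
import numpy as np

P_dc, P_diss = 1.0, 0.1          # supply power limit and dissipation floor (W)
G_ss = 100.0                     # 20 dB small-signal gain

P_in = np.logspace(-6, 0, 200)   # input power sweep, 1 uW .. 1 W
P_out = np.minimum(G_ss * P_in, P_in + P_dc - P_diss)  # energy-conservation ceiling
G = P_out / P_in                 # effective gain versus drive level

print(round(G[0], 6), round(G[-1], 6))  # 100.0 1.9 -- flat gain at small drive,
                                        # heavy compression at large drive
```

However the amplifier is built, any finite *P*_{dc} forces the gain curve to bend over eventually.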

Actually, this study provided us with a much deeper insight into linearity and linear systems. Linearity is what we obtain when looking only at the system's input-to-output signal mapping (leaving aside the dc-to-RF energy conversion process) and when the signal is a very small perturbation of the dc quiescent point. So, linear systems are the conceptual mathematical model for the behaviors obtained from analytic operators (i.e., continuous and infinitely differentiable mappings) when these are excited with signals whose amplitudes are infinitesimally small as compared with the magnitude of the quiescent points. And it is under this small-signal operation regime that the linear approximation is valid. We will come back to this important concept later.

A sinusoidal oscillator is another system that depends on nonlinearity to operate. Although in basic linear system analysis we learned how to predict the stable and unstable regimes of amplifiers, and so to predict oscillations, we were not told the complete story. To understand why, we can just use the above results on the analysis of the amplifier and recognize that, by definition, an oscillator is a system that provides an output even without an input. That is, contrary to an amplifier, which is a nonautonomous, or forced, system, an oscillator is an autonomous one. So, if it did not rely on any external source of power, it would violate the energy conservation principle. Like an amplifier, it is, instead, a transducer that converts energy from a dc power supply into signal power at some frequency *ω*. Hence, like the amplifier, it must rely on some form of nonlinearity. But, unlike the amplifier, which we have shown can behave in an approximately linear way when seen from the input signal to the output signal, we will now show that not even this is possible in an oscillator.

To see why, consider the following linear differential equation of constant (i.e., time-invariant) coefficients – one of the most common models of linear systems:

*L*(d²*i*(*t*)/d*t*²) + (*R*_{S} + *R*_{L} + *R*_{A})(d*i*(*t*)/d*t*) + (1/*C*)*i*(*t*) = d*v*_{s}(*t*)/d*t* (1.15)

which describes the loop current, *i*(*t*), sinusoidal oscillations of a series RLC circuit when the excitation vanishes, *v*_{s}(*t*) = 0. The proof that this equation is indeed the model of a linear time-invariant system is left as an exercise for the reader (see Exercise 1.6).

This RLC circuit is assumed to be driven by an active device whose model for the power delivered to the network is the negative resistance *R*_{A}, and to be loaded by the load resistance *R*_{L} and the inherent LC tank losses *R*_{S}. It can be shown that the solution of this equation, when *v*_{s}(*t*) = 0, is of the form

*i*(*t*) = *Ae*^{−λt} cos (*ωt*)

where *λ* = (*R*_{S} + *R*_{L} + *R*_{A})/(2*L*) and *ω* = √(1/(*LC*) − *λ*²).

The first curious result of this linear oscillator model is that it does not provide any prediction for the oscillation amplitude, *A*, as if *A* could take any arbitrary value. The second is that, to keep a steady-state oscillation, i.e., one whose amplitude does not decay or increase exponentially with time, *λ* must be exactly (i.e., with infinite precision) zero, or *R*_{A} = −(*R*_{S} + *R*_{L}), something our engineering common sense finds hard to believe. Both of these unreasonable conditions are a consequence of the absence of any energy constraint in (1.15), which, itself, is a consequence of the performed linearization. In practice, what happens is that the active device is nonlinear; its negative resistance is not constant but an increasing function of amplitude, *R*_{A}(*A*) = −*f*(*A*), so that a negative feedback process keeps the oscillation amplitude constant at *A* = *f*^{−1}(*R*_{S} + *R*_{L}), in which *f*^{−1}(.) represents the inverse function of *f*(.).
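The three regimes of the linear solution *i*(*t*) = *Ae*^{−λt} cos (*ωt*) are easy to reproduce numerically. In this sketch (illustrative component values, chosen by us; the time span covers only a few RF cycles), the oscillation dies out, diverges, or is perfectly steady depending on the sign of *λ*:

```python
import numpy as np

L_, C_ = 1e-9, 1e-12             # illustrative 1 nH / 1 pF tank
RS, RL = 1.0, 50.0               # tank loss and load resistances (ohm)
t = np.linspace(0.0, 2e-9, 2001)

def i_of_t(RA, A=1.0):
    # Linear RLC solution: i(t) = A exp(-lambda t) cos(omega t)
    lam = (RS + RL + RA) / (2 * L_)
    w = np.sqrt(1.0 / (L_ * C_) - lam**2)
    return A * np.exp(-lam * t) * np.cos(w * t)

i_decay = i_of_t(RA=-40.0)        # |RA| too small: lambda > 0, oscillation dies out
i_grow = i_of_t(RA=-60.0)         # |RA| too large: lambda < 0, amplitude diverges
i_steady = i_of_t(RA=-(RS + RL))  # lambda exactly 0: steady oscillation of arbitrary A

print(abs(i_decay[-1]) < 1e-3, abs(i_grow[-1]) > 10.0)  # True True
```

Note that `i_steady` oscillates forever with whatever `A` we pass in: the linear model indeed places no constraint on the amplitude, which is exactly the gap the amplitude-dependent *R*_{A}(*A*) fills.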

Although nonlinearity is often seen as a source of perturbation, referred to with many terms, such as *harmonic distortion, nonlinear cross-talk, desensitization*, or *intermodulation distortion*, it plays a key role in wireless communications. As a matter of fact, these two examples, along with the amplitude modulator of Exercises 1.3–1.5, show that nonlinearity is essential for amplifiers, oscillators, modulators, and demodulators. And since wireless telecommunication systems depend on these devices to generate RF carriers, translate information base-band signals back and forth to radio frequency (reducing the size of the antennas), and provide amplification to compensate for the free-space path loss, we easily conclude that without nonlinearity wireless communications would be impossible. As an illustration, Figure 1.6 shows the block diagram of a wireless transmitter in which the blocks from which nonlinearity should be expected are put in evidence.

Figure 1.6 Block diagram of a wireless transmitter where the blocks from which nonlinearity is expected are highlighted: linear blocks are represented within dashed line boxes whereas nonlinear ones are drawn within solid line boxes.

### 1.4 Properties of Nonlinear Systems

This section illustrates the multiplicity of behaviors that can be found in nonlinear dynamic systems. Although a full mathematical analysis of those responses does not constitute a key objective of this chapter, we will nevertheless base our tests on a simple circuit so that each of the observed behaviors can be approximately explained by relating it to the circuit topology and components.

The analyses will be divided into forced and autonomous regimes, like the ones found in amplifiers or frequency multipliers and oscillators, respectively. However, to obtain a more applied view of these responses, we will further group forced regimes into responses to CW and modulated excitations.

#### 1.4.1 An Example of a Nonlinear Dynamic Circuit

To start exploring some of the basic properties of nonlinear systems, we will use the simple (conceptual) amplifier shown in Figure 1.7.

Figure 1.7 (a) Conceptual amplifier circuit used to illustrate some properties of forced nonlinear systems. (b) Equivalent circuit when the dc block capacitors and the dc feed inductances are substituted by their corresponding dc sources. (c) Simplified unilateral circuit after *C*_{gd} was reflected to the input via its Miller equivalent.

In order to preserve the desired simplicity, enabling us to qualitatively relate the obtained responses to the circuit's model, we will assume that the input and output block capacitors, *C*_{B}, and the RF bias chokes, *L*_{Ch}, are short-circuits and open circuits to the RF signals, respectively. Therefore, the dc blocking capacitors can be simply replaced by two ideal dc voltage sources, the gate RF choke can be neglected, and the drain choke must be replaced by a dc current source that equals the average *i*_{DS}(*t*) current, *I*_{DS}. This is illustrated in Figure 1.7(b). Furthermore, we will also assume that the FET's feedback capacitor, *C*_{gd}, can be replaced by its input and output reflected Miller capacitances, whose values are *C*_{gd_in} = *C*_{gd}(1 − *A*_{v}) and *C*_{gd_out} = *C*_{gd}(*A*_{v} − 1)/*A*_{v}, respectively. Assuming that the voltage gain, *A*_{v}, is negative and much higher than one in magnitude (rigorously speaking, much smaller than minus one), the total FET input capacitance, *C*_{i}, will be approximately given by *C*_{i} = *C*_{gs} + *C*_{gd}|*A*_{v}|, while the output capacitance, *C*_{o}, will equal *C*_{gd}, being thus negligible. Under these conditions, the schematic of Figure 1.7(b) becomes the one shown in Figure 1.7(c), whose analysis, using Kirchhoff's laws, leads to

and

*v*_{DS}(*t*) = *V*_{DD} − *R*_{L}[*i*_{DS}(*t*) − *I*_{DS}] (1.17)

in which *i*_{G}(*t*) = *C*_{i}(d*v*_{GS}(*t*)/d*t*) is the gate current flowing through *C*_{i} and *i*_{DS}(*t*) is the FET's drain-to-source current. This drain current is assumed to be some suitable static nonlinear function of the gate-to-source voltage, *v*_{GS}(*t*), and drain-to-source voltage, *v*_{DS}(*t*).

In case *v*_{DS}(*t*) is kept sufficiently high so that *v*_{DS}(*t*) >> *V*_{K}, the FET's knee voltage, *i*_{DS}(*t*) can be considered only dependent on the input voltage, *i*_{DS}(*v*_{GS}). Using these results in (1.17), the differentiation chain rule leads to a second-order differential equation whose solution, *v*_{GS}(*t*), allows the determination of the amplifier output voltage as

*v*_{o}(*t*) = *v*_{DS}(*t*) − *V*_{DD} = −*R*_{L}{*i*_{DS}[*v*_{GS}(*t*)] − *I*_{DS}}

#### 1.4.2 Response to CW Excitations

Because of the central role they play in RF circuits, we will start by identifying the responses to sinusoidal, or CW, excitations (plus the dc bias). Hence, our amplifier is described by a circuit that includes two nonlinearities: *i*_{DS}(*v*_{GS},*v*_{DS}), which is static, and *C*_{i}, a nonlinear capacitor which, depending on the voltage gain, evidences its nonlinearity when the amplifier suffers from *i*_{DS}(*v*_{GS},*v*_{DS})-induced gain compression.

Figure 1.8 shows the drain-source voltage evolution in time, *v*_{DS}(*t*), for three different CW excitation amplitudes, while Figure 1.9 depicts the respective spectra. Under small-signal regime, i.e., small excitation amplitudes, in which the FET is kept in the saturation region and *v*_{gs} (the signal component of the composite *v*_{GS} voltage, defined by *v*_{gs} ≡ *v*_{GS} − *V*_{GS} and, in this case, *V*_{GS} = *V*_{GG}) is so small that *i*_{DS}(*v*_{GS},*v*_{DS}) ≈ *I*_{DS} + *g*_{m}*v*_{gs} + *g*_{m2}*v*_{gs}² + *g*_{m3}*v*_{gs}³ ≈ *I*_{DS} + *g*_{m}*v*_{gs}, the amplifier presents an almost linear response without any other harmonics than the dc and the fundamental component already present at the input. As we increase the excitation amplitude, the *v*_{GS}(*t*) and *v*_{DS}(*t*) voltage swings become sufficiently large to excite the FET's *i*_{DS}(*v*_{GS},*v*_{DS}) cutoff (*v*_{GS}(*t*) < *V*_{T}, the FET's threshold voltage) and knee voltage nonlinearities (*v*_{DS}(*t*) ≈ *V*_{K}), and the amplifier starts to evidence its nonlinear behavior, producing other frequency-domain harmonic components, or time-domain distorted waveforms.

Figure 1.8 *v*_{DS}(*t*) voltage evolution in time for three different CW excitation amplitudes. Note the distorted waveforms arising when the excitation input is increased.

Figure 1.9 *V*_{ds}(*ω*) spectrum for three different CW excitation amplitudes. Note the increase in the harmonic content with the excitation amplitude.
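The harmonic growth of Figure 1.9 can be mimicked with the cubic *i*_{DS} approximation above. In this sketch (illustrative polynomial coefficients and frequencies, all our own choices), an FFT of the polynomial's response to a sinusoid shows the harmonics rising with drive level:

```python
import numpy as np

gm, gm2, gm3 = 50e-3, 10e-3, -20e-3   # illustrative coefficients (S, S/V, S/V^2)
fs, N, f0 = 1000, 1000, 10            # sampling grid with the tone exactly on bin f0
t = np.arange(N) / fs

def harmonic_amps(A):
    # Drive the cubic i_DS(v_gs) approximation with a sinusoid of amplitude A
    vgs = A * np.cos(2 * np.pi * f0 * t)
    ids = gm * vgs + gm2 * vgs**2 + gm3 * vgs**3
    S = 2 * np.abs(np.fft.rfft(ids)) / N       # single-sided amplitude spectrum
    return S[[f0, 2 * f0, 3 * f0]]             # fundamental, 2nd and 3rd harmonics

print(harmonic_amps(0.01))  # harmonics orders of magnitude below the fundamental
print(harmonic_amps(1.0))   # 2nd and 3rd harmonics now clearly visible
```

The measured fundamental matches *g*_{m}*A* + (3/4)*g*_{m3}*A*³, the second harmonic *g*_{m2}*A*²/2, and the third |*g*_{m3}|*A*³/4, exactly the terms a trigonometric expansion of the cubic predicts.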

In a cubic nonlinearity such as the one used above for the dependence of *i*_{DS} on *v*_{GS}, this means that a sinusoidal excitation, *v*_{s}(*t*) = *A* cos (*ωt*), produces a gate-source voltage of *v*_{gs}(*t*) = |*V*_{gs}(*ω*)| cos (*ωt* + *ϕ*_{i}), in which |*V*_{gs}(*ω*)| and *ϕ*_{i} are the amplitude and phase set by the linear input network transfer function, *H*_{i}(*ω*). This gate-source voltage then

produces the following *i*_{ds}(*t*) response:

*i*_{ds}(*t*) = (*g*_{m2}|*V*_{gs}|²/2) + [*g*_{m}|*V*_{gs}| + (3/4)*g*_{m3}|*V*_{gs}|³] cos (*ωt* + *ϕ*_{i}) + (*g*_{m2}|*V*_{gs}|²/2) cos (2*ωt* + 2*ϕ*_{i}) + (*g*_{m3}|*V*_{gs}|³/4) cos (3*ωt* + 3*ϕ*_{i})

which evidences the generation of a linear component, proportional to the input stimulus, a quadratic dc component and second and third harmonics, beyond a cubic term at the fundamental. Actually, it is this fundamental cubic term that is responsible for modeling the amplifier's gain compression, since the equivalent amplifier transconductance gain is

*G*_{m}(|*V*_{gs}|) = *g*_{m} + (3/4)*g*_{m3}|*V*_{gs}|²

i.e., is dependent on the input amplitude.
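From this amplitude-dependent transconductance one can estimate, for instance, the drive level at which the gain drops by 1 dB, a common figure of merit. The coefficient values below are again illustrative assumptions of ours:

```python
import numpy as np

gm, gm3 = 50e-3, -2e-3    # gm3 < 0 produces gain compression (illustrative values)

def Gm(V):
    # Equivalent transconductance at the fundamental for drive amplitude V
    return gm + 0.75 * gm3 * V**2

# Amplitude at which Gm falls 1 dB below its small-signal value gm:
V_1dB = np.sqrt(gm * (1.0 - 10.0 ** (-1.0 / 20.0)) / (-0.75 * gm3))

print(np.isclose(Gm(V_1dB), gm * 10.0 ** (-1.0 / 20.0)))  # True
print(Gm(0.0) > Gm(1.0))                                  # True: gain falls with drive
```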

In communication systems, in which an RF sinusoidal carrier is modulated with some amplitude and phase information (the so-called complex envelope [1]), such a gain variation with input amplitude has always deserved particular attention, since it describes how the input amplitude and phase information are changed by the amplifier. This defines the so-called AM/AM and AM/PM distortions. That is what is represented in Figures 1.10 and 1.11, in which the input–output fundamental carrier amplitude and phase (with respect to the input phase reference) are shown versus the input amplitude. When the output voltage waveform becomes progressively limited by the FET's nonlinearities, its corresponding gain at the fundamental component gets compressed and its phase-lag reduced. Indeed, when the voltage gain is reduced, so is the *C*_{gd} Miller reflected input capacitance, and thus *C*_{i}. Hence, the phase of *H*_{i}(*ω*) increases, and consequently, the *v*_{GS}(*t*) fundamental component shows an apparent phase-lead (actually a reduced phase-lag), revealed as the AM/PM of Figure 1.11.

Figure 1.11 Response to CW excitations: AM/PM characteristic.

What happens in real amplifiers (see, for example, [2]) is that, because of the circuit nonlinear reactive components, or, as was here the case, because of the interactions between linear reactive components (*C*_{gd}) and static nonlinearities (*i*_{DS}(*v*_{GS},*v*_{DS}), which manifests itself as a nonlinear voltage gain, *A*_{v}), the equivalent gain suffers a change in both magnitude and phase manifested under CW excitation as AM/AM and AM/PM.

#### 1.4.3 Response to Multitone or Modulated Signals

The extension of the CW regime to a modulated one is trivial, as long as we can assume that the circuit responds to a modulated signal like

*v*_{S}(*τ*,*t*) = *a*(*τ*) cos [*ω*_{c}*t* + *ϕ*(*τ*)]

without presenting any memory to the envelope. This means that the circuit treats our modulated carrier as a succession of independent CW signals of amplitude *a*(*τ*) and phase *ϕ*(*τ*). That is, we are implicitly assuming that the envelope varies in a much slower and uncorrelated way – as compared to the carrier – (the narrow bandwidth approximation), as if the RF carrier would evolve in a fast time *t* while the base-band envelope would evolve in a much slower time *τ*. In that case, for example, a two-tone signal of frequencies *ω*_{1} = *ω*_{c}−*ω*_{m} and *ω*_{2} = *ω*_{c} + *ω*_{m}, i.e., whose frequency separation is 2*ω*_{m} and is centered at *ω*_{c}, and in which *ω*_{c} >> *ω*_{m}, can be seen as a double sideband amplitude modulated signal of the form
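This equivalence between a two-tone signal and a double-sideband AM signal is just the product-to-sum trigonometric identity, which a quick numerical check confirms (the frequencies in this sketch are arbitrary choices of ours):

```python
import numpy as np

fc, fm, A = 100.0, 5.0, 1.0      # carrier and modulation frequencies, tone amplitude
t = np.linspace(0.0, 1.0, 4000)

# Two tones at wc - wm and wc + wm...
two_tone = (A * np.cos(2 * np.pi * (fc - fm) * t)
            + A * np.cos(2 * np.pi * (fc + fm) * t))

# ...equal a carrier at wc amplitude-modulated by a(tau) = 2A cos(wm tau):
dsb_am = 2 * A * np.cos(2 * np.pi * fm * t) * np.cos(2 * np.pi * fc * t)

print(np.allclose(two_tone, dsb_am))  # True
```

Here the slow factor `2 * A * np.cos(2 * np.pi * fm * t)` plays the role of the envelope *a*(*τ*), while the fast cosine is the carrier, matching the two-time-scale picture above.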