2 Basic Nonlinear Microwave Circuit Analysis Techniques



The main goal of this chapter is to review the various nonlinear simulation techniques available for RF and microwave circuit analysis, and to present the basic operation and properties of those techniques. Although this is not intended to be a course on nonlinear analysis or simulation, which is far beyond the scope of this book, we believe it is still important for the reader to have a basic idea of how these simulation tools function, so that he or she can select the most appropriate technique for the task at hand and be aware of its limitations.


Although mathematical details will be kept to a minimum, they can still intimidate many microwave engineers. If this is your case, do not worry: stick to the qualitative explanations and the physical interpretations provided to describe the equations. Ultimately, these are more important than the mathematical derivations themselves.


Before we move on to discuss the details of each of the simulation techniques, it is convenient to first state a set of rules common to all of them. So, we start by defining what is meant by circuit or system simulation.


As explained in Chapter 1, the concept of system can be used in its most general form to represent either circuit elements, circuits, or complete RF systems. Therefore, we will use the notion of system whenever we want to achieve this level of generality, and we will adopt the designation of circuit element, component, circuit, or RF system otherwise.


Nowadays, circuit or RF system simulation is a process in which the behavior of a real circuit or RF system is mimicked by solving, in a digital computer, the equations that – we believe – describe the signal excitation and govern the physical operation of that circuit. Hence, the success of simulation should be evaluated by comparing the real circuit or RF system behavior with the one predicted by the simulator. The simulation accuracy depends on both the adopted equations and the numerical techniques used to solve them. The set of equations that represent the behavior of each element constitutes the model of the element. The set of equations that describe how these elements are connected (the circuit or RF system topology) defines the model of the system. The numerical techniques, or numerical methods, used to solve these equations are known as the numerical solvers.


Contrary to linear systems, most nonlinear systems found in practice are described by complex nonlinear equations that do not have any closed form solution. Hence, there is no way to get a general understanding of the system’s behavior and we have to rely on partial views of its response to a few particular excitations. This is the underlying reason why simulation plays a much more important role in nonlinear circuit or RF system design than in its linear counterpart.


Currently, the mathematical discipline of numerical methods is so advanced that the accuracy limitation of modern circuit simulators resides almost entirely in the models' accuracy. But, since the circuit's equations are often derived from nodal analysis (a statement of the universal law of charge conservation), we would not be exaggerating if we said that circuit simulation accuracy limitations must be solely attributed to the circuit component models, or to a possible misuse of the simulator (for example, expecting it to do something for which it was not intended).


Because it is the users who select the simulator, and most of the time, it is their responsibility to provide the simulator with the appropriate models, it is essential that they have, at least, a basic understanding of what the simulator does, so that they can a priori build a rough estimate of the simulation result. Otherwise, the entire simulation effort can be reduced to a frustrating “garbage in, garbage out.”


Actually, the main objective of this book is to provide the RF/microwave engineer with the knowledge that prevents falling into any of these traps. The present chapter aims at the correct use of the simulator, while the next four address the components' models.


This chapter on simulation is organized in four major sections. Section 2.1 deals with the mathematical representation, or model, of signals and systems. Section 2.2 treats time-domain simulation tools for aperiodic and periodic regimes; hence, its main focus is the transient analysis of SPICE-like simulators. Although Section 2.2 already addresses the steady-state periodic regime in the time domain, via the so-called periodic steady-state, or PSS, algorithm, this regime is at the core of the harmonic-balance, HB, frequency-domain technique of Section 2.3. Finally, Section 2.4 describes a group of algorithms especially devoted to analyzing circuits and telecommunication systems excited by modulated RF carriers. Among these, special attention is paid to the envelope-transient harmonic-balance, ETHB, a mixed-domain – time and frequency – simulation tool.



2.1 Mathematical Representation of Signals and Systems


As stated above, simulation tools comprise the mathematical representation – equations or models – of the system whose behavior is to be predicted, and the numerical solvers for these equations. Therefore, this section is intended to introduce general forms for these representations. For that, we will consider the wireless transmitter depicted in Figure 2.1.





Figure 2.1 The block diagram of a wireless transmitter used to illustrate the disparate nature of circuits and signals dealt with by RF circuit designers.



2.1.1 Representation of Circuits and Systems


In the wireless transmitter example in Figure 2.1 we can recognize a very heterogeneous set of circuits such as digital signal-processing and control, mixed digital-analog, analog IF or baseband, and RF. Considering only this latter subset, we can note circuits that operate in an autonomous way, like the local oscillator, and ones that are driven, and thus present a forced behavior. In addition, we may also divide them into circuits solely composed of lumped elements (like ideal resistors, capacitors, inductors, and controlled sources) or into ones that include distributed elements (such as transmission lines, microstrip discontinuities, waveguides, etc.).



2.1.1.1 Circuits of Lumped Elements

As seen in Chapter 1, the response of dynamic circuits or systems cannot be given as a function of the instantaneous input alone, but depends also on the input's past, stored as the circuit's state. For example, in lumped circuits, i.e., circuits composed of only lumped elements, the state comprises the electric charge stored in the capacitors and the magnetic flux stored in the inductors, the so-called system's state-variables or state-vector. Assuming that these capacitor charges and inductor fluxes are memoryless functions of the corresponding voltages and currents (the quasi-static assumption, in which accumulated charges, fluxes, or currents are assumed to be instantaneous functions of the applied fields), the state-equation is the mathematical rule that describes how the state-vector evolves in time. It is an ordinary differential equation, ODE, in time that expresses the time-rate of change of the state-vector, i.e., its derivative with respect to time, as a function of the present state and excitation:


$$\frac{ds(t)}{dt} = f\left[s(t), x(t)\right] \qquad (2.1)$$

where s(t) is the state-vector, x(t) is the vector of inputs, or excitation-vector (independent current and voltage sources), and f[s(t), x(t)] is a set of functions, each one derived from applying Kirchhoff's current law, KCL, to the nodes, and Kirchhoff's voltage law, KVL, to the loops.


With this state-equation, one can calculate the trajectory of the state-vector over time, s(t). Then, it is easy to compute the output-vector, y(t), i.e., the set of observable currents, voltages, or any algebraic operation on these (such as power, for example) from the following output-equation:



$$y(t) = g\left[s(t), x(t)\right] \qquad (2.2)$$




Example 2.1 The State- and Output-Equations


For an illustrative example of these state- and corresponding output-equations, let us use the circuit of Figure 2.2. The application of Kirchhoff's current law to the input and output nodes, together with the branch equation of the drain inductor, leads to


$$\begin{aligned}
\frac{v_G(t)-v_I(t)}{R_s} + C_g\frac{dv_G(t)}{dt} + C_b\frac{d}{dt}\left[v_G(t)-v_D(t)\right] &= 0\\[4pt]
C_b\frac{d}{dt}\left[v_D(t)-v_G(t)\right] + i_D(v_G,v_D) + i_L(t) + \frac{v_D(t)}{R_L} &= 0\\[4pt]
v_D(t)-V_{DD} &= L_d\frac{di_L(t)}{dt}
\end{aligned} \qquad (2.3)$$

which can be rewritten as


$$\begin{aligned}
\frac{dv_G(t)}{dt} &= \frac{1}{C_g}\left[\frac{v_I(t)-v_G(t)}{R_s} - i_D(v_G,v_D) - i_L(t) - \frac{v_D(t)}{R_L}\right]\\[4pt]
\frac{dv_D(t)}{dt} &= \frac{1}{C_g}\,\frac{v_I(t)-v_G(t)}{R_s} - \frac{C_b+C_g}{C_b C_g}\left[i_D(v_G,v_D) + i_L(t) + \frac{v_D(t)}{R_L}\right]\\[4pt]
\frac{di_L(t)}{dt} &= \frac{v_D(t)-V_{DD}}{L_d}
\end{aligned} \qquad (2.4)$$

Defining now s1(t) ≡ vG(t), s2(t) ≡ vD(t), s3(t) ≡ iL(t), x1(t) ≡ vI(t), and x2(t) ≡ VDD allows us to express these nodal equations in the canonical state-equation form of (2.1) as


$$\frac{ds_n(t)}{dt} = f_n\left[s_1(t),\ldots,s_n(t),\ldots,s_N(t),\; x_1(t),\ldots,x_m(t),\ldots,x_M(t)\right] \qquad (2.5)$$

where N = 3, M = 2, and


$$\begin{aligned}
f_1\left[s(t), x(t)\right] &= \frac{1}{C_g}\left[\frac{x_1(t)-s_1(t)}{R_s} - i_D\!\left(s_1(t), s_2(t)\right) - s_3(t) - \frac{s_2(t)}{R_L}\right]\\[4pt]
f_2\left[s(t), x(t)\right] &= \frac{1}{C_g}\,\frac{x_1(t)-s_1(t)}{R_s} - \frac{C_b+C_g}{C_b C_g}\left[i_D\!\left(s_1(t), s_2(t)\right) + s_3(t) + \frac{s_2(t)}{R_L}\right]\\[4pt]
f_3\left[s(t), x(t)\right] &= \frac{s_2(t)-x_2(t)}{L_d}
\end{aligned} \qquad (2.6)$$

Assuming that the desired observable is the output voltage, vO(t), the canonical output-equation can be obtained following a similar procedure, defining y(t) ≡ vO(t) = vD(t) = s2(t):



$$y(t) = g\left[s_1(t), s_2(t), s_3(t), x_1(t), x_2(t)\right] \qquad (2.7)$$

which, in this case, is simply y(t) = s2(t).
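To make Example 2.1 concrete, the following Python sketch encodes the state-functions of (2.6) and the output-equation (2.7). The factory function make_state_equations and the argument i_d are our own illustrative names: i_d(vG, vD) stands for whatever memoryless drain-current model one adopts for the MOSFET of Figure 2.2, which is not specified here.

```python
import numpy as np

def make_state_equations(Rs, RL, Cg, Cb, Ld, i_d):
    """Build f(s, x) and g(s, x) of (2.5)-(2.7) for the circuit of Figure 2.2.

    s = [vG, vD, iL] is the state-vector, x = [vI, VDD] the excitation-vector,
    and i_d(vG, vD) a user-supplied memoryless drain-current model (assumption).
    """
    def f(s, x):
        vG, vD, iL = s
        vI, VDD = x
        iD = i_d(vG, vD)
        f1 = ((vI - vG) / Rs - iD - iL - vD / RL) / Cg                            # f1 of (2.6)
        f2 = (vI - vG) / (Rs * Cg) - (Cb + Cg) / (Cb * Cg) * (iD + iL + vD / RL)  # f2 of (2.6)
        f3 = (vD - VDD) / Ld                                                      # f3 of (2.6)
        return np.array([f1, f2, f3])

    def g(s, x):
        return s[1]  # y(t) = vO(t) = vD(t) = s2(t), per (2.7)

    return f, g
```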





Figure 2.2 Simplified FET amplifier circuit (a) and its equivalent circuit (b) intended to illustrate the analysis algorithms. vI(t) = VGG + vS(t), where vS(t) is the excitation, VGG = 3 V, VDD = 10 V, Rs = 50 Ω, Cg = 50 pF, Cb = 4.4 pF, Ld = 120 nH, and RL = 6.6 Ω. The MOSFET has a transconductance of gm = 10 A/V, a threshold voltage of VT = 3 V, and a knee voltage of VK = 1 V.


Because this system is a third-order one, its behavior is completely described by a three-dimensional state-vector, i.e., involving three state-variables. So, as explained in Section 1.4.4, its phase-portrait should be represented in a volume whose axes are s1, s2, and s3, or vG, vD, and iL, as is shown in Figure 2.3. Please note that the state-variables correspond to the circuit's three independent energy-storing elements: s1(t) ≡ vG(t) is a measure of the accumulated charge (electric field) in the input capacitor Cg, s2(t) ≡ vD(t) is needed to build vGD(t) = vG(t) − vD(t) = s1(t) − s2(t), a measure of the accumulated charge in the feedback capacitor Cb, and s3(t) ≡ iL(t) stands for the accumulated magnetic flux in the output drain inductor Ld.





Figure 2.3 (a) Illustrative trajectory of the three-dimensional system state, [x y z] ≡ [vG vD iL], of our example circuit. (b) Projection of the 3D state-space trajectory onto the [vG vD] plane. (c) Projection of the 3D state-space trajectory onto the [vG iL] plane. (d) Projection of the 3D state-space trajectory onto the [vD iL] plane.


Although the phase-portrait is a parametric curve in which the evolving parameter is time, it does not allow a direct observation of the trajectory of each of the state-variables over time. In fact, a graph like Figure 2.3 (a) cannot tell us when the system is more active or, on the contrary, when it is almost latent. Such a study would require separate graphs of s1(t), s2(t) or s3(t) versus time as is exemplified in Figure 2.4 for s1(t) and s2(t).





Figure 2.4 Time evolution of two of the system's state-variables. (a) s1(t) = vG(t) and (b) s2(t) = vD(t).



2.1.1.2 Circuits with Distributed Elements

Contrary to circuits that include only lumped elements, circuits or systems that involve distributed components, such as transmission lines or other distributed electromagnetic multi-ports, cannot be described by ODEs in time, requiring instead a description in both time and space. That is the case of the power divider/combiner, the circulator, or the output microwave filter of Figure 2.1, and of the transmission line of Figure 2.5. As an example, the transmission line is described by the following partial differential equations, PDEs, in time, t, and space, x:


$$\begin{aligned}
\frac{\partial v(x,t)}{\partial x} &= -L\frac{\partial i(x,t)}{\partial t} - R\,i(x,t)\\[4pt]
\frac{\partial i(x,t)}{\partial x} &= -C\frac{\partial v(x,t)}{\partial t} - G\,v(x,t)
\end{aligned} \qquad (2.8)$$

where i(x, t) and v(x, t) are the distributed current and voltage along the line, and R, L, C, and G stand for the usual line's distributed resistance, inductance, capacitance, and conductance per unit length, respectively.





Figure 2.5 The uniform transmission line as an example of a distributed component. Such distributed and dispersive components are represented by partial differential equations in time and space, impeding the use of the state-equation as a general model.


Please note that, because all time-domain circuit simulators are programmed to solve ODEs, they require that the circuit be represented by the state- and output-equations of (2.1) and (2.2). Consequently, a lumped representation is needed for distributed components. Such lumped representation is usually given in the form of an approximate equivalent circuit, obtained from the frequency-domain behavior observed at the device ports, which mimics the operation of the original multiport [1]. However, alternative behavioral representations, such as the one based on the impulse response function and subsequent convolution with the component’s excitation, are also possible [2].
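As a minimal sketch of the impulse-response alternative mentioned above, assuming the port's impulse response has already been extracted (e.g., from frequency-domain data) and sampled on a uniform grid, the behavioral response is just a discrete convolution with the excitation; the function name, the first-order impulse response, and all numerical values below are merely illustrative.

```python
import numpy as np

def port_response(h, x, dt):
    """Behavioral time-domain model of a distributed element's port:
    y(t) = (h * x)(t), approximated by discrete convolution on a uniform grid."""
    return dt * np.convolve(h, x)[: len(x)]

# Illustrative use: a fictitious first-order impulse response and a step excitation.
dt = 1e-12                      # 1 ps time step (assumed)
t = np.arange(0, 2e-9, dt)
h = 1e10 * np.exp(-1e10 * t)    # made-up impulse response (0.1 ns time constant)
x = np.ones_like(t)             # unit-step excitation
y = port_response(h, x, dt)
```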


It should be emphasized that these are behavioral representations, not real equivalent circuits, as distributed elements do not have a true lumped representation. For example, even our uniform transmission line of Figure 2.5 would need an infinite number of elementary lumped RLCG sections. Therefore, if one is willing to simulate such a circuit in the time domain, he or she should be aware that what is being simulated is not the actual circuit but another one that, hopefully, produces a similar result. For example, if the extracted "equivalent" circuit is not guaranteed to obey certain properties, such as passivity [3], one may end up with nonconvergence problems or erroneous solutions, such as a nonphysical (and so impossible) oscillatory response from an original circuit composed of only passive components.



2.1.2 Representation of Signals


As far as the signals are concerned, we could group them into baseband – or slowly evolving – information signals, fast-varying RF carriers, and modulated signals, in which the first two are combined. Furthermore, as shown in the spectra included in Figure 2.1, we can also note that, contrary to the RF carriers, which are periodic and thus narrowband, baseband signals are wideband (spanning many octaves or decades) and are necessarily aperiodic, as they carry information (nonrepeatable, by definition). Therefore, while the periodic RF carrier can be represented in either the time or the frequency domain, baseband signals must necessarily be treated in the time domain, since the discrete Fourier series assumes periodicity in both time and frequency.

Actually, because we will be using a digital computer, i.e., a machine with a finite number of states, it necessarily imposes a discrete (sampled) and bounded description of any time- or frequency-domain entity. This means that if you want to represent, in the frequency domain, a certain continuous-time signal extending from, say, 0 to T seconds, what you will get is the frequency-domain representation of another signal, discrete and periodic with period T, which extends from –∞ to +∞, and whose sampling frequency, fs, is determined by the computer time resolution. Such a signal has a discrete and periodic spectrum whose frequency resolution is Δf = 1/T and whose period is fs. Conversely, any continuous-spectrum signal will be treated as one of discrete spectrum, which has a periodic time-domain representation. In addition, any Fourier conversion between these two domains will necessarily be executed via the discrete Fourier transform, DFT, or its fast version, the FFT, in case the number of time and frequency points is a power of two.
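The following NumPy fragment illustrates the point just made: sampling a record of length T at a rate fs and applying the DFT yields a discrete spectrum with resolution Δf = 1/T that implicitly assumes the record repeats forever; the sampling rate, record length, and the 1 GHz test tone are arbitrary choices for illustration.

```python
import numpy as np

fs = 16e9                 # sampling frequency set by the time resolution (assumed value)
T = 10e-9                 # observed record length
N = int(round(T * fs))    # number of time samples
t = np.arange(N) / fs

v = np.cos(2 * np.pi * 1e9 * t)    # a 1 GHz "carrier" sampled over T
V = np.fft.fft(v) / N              # DFT: implicitly treats v as T-periodic
f = np.fft.fftfreq(N, d=1 / fs)    # discrete frequency grid
df = f[1] - f[0]                   # frequency resolution, equal to 1/T;
                                   # the spectrum itself repeats with period fs
```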



2.2 Time-Domain Circuit Analysis


This section deals with circuit analysis techniques that operate in the time domain, such as SPICE or SPICE-like transient simulators [4], [5], and the ones devoted to calculating the periodic steady-state in the time domain. These simulators assume that the excitation-vector is known over the period of time in which the response is desired, and that the circuit is represented by the state- and output-equations of (2.1) and (2.2). In addition, it is assumed that the response is desired as a waveform in the time domain.



2.2.1 Time-Step Integration Engines


Time-step integration is used in all SPICE-like transient simulators [4], [5], but also in many system simulators [6]–[8]. Since it aims at the evolution of the circuit in time, it can be seen as the most "natural" circuit analysis technique, because it closely follows the conceptual interpretation we make of the circuit operation. Actually, assuming one starts from a known excitation, x(t), discretized in a number of appropriate time-points, x(tn), and an also known initial state, s(t0), one can easily compute the response at all future time samples, s(tn), from the discretization of the state-equation. This discretization reflects the knowledge that the derivative of the state at time tn is the limit, when the time-step length hn tends to zero, of the ratio [s(tn+1) − s(tn)]/hn. Therefore, the state-equation (2.1) can be approximated by the following difference equation


$$\frac{s(t_{n+1}) - s(t_n)}{h_n} = f\left[s(t_n), x(t_n)\right] \qquad (2.9)$$

which allows the calculation of all s(tn) using the following recursive scheme known as the forward Euler method [9]



$$s(t_{n+1}) = s(t_n) + h_n f\left[s(t_n), x(t_n)\right] \qquad (2.10)$$

on a time-step-by-time-step basis. It is this time-step integration of the state-equation that gives this family of methods its name.
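A direct transcription of the forward Euler recursion (2.10) into Python is sketched below; f is any state-function with the signature used in Example 2.1 and x_of_t is a user-supplied function returning the excitation-vector at an arbitrary instant (both are placeholder names, not simulator APIs).

```python
import numpy as np

def forward_euler(f, x_of_t, s0, t):
    """Integrate ds/dt = f(s, x(t)) with the explicit recursion (2.10).
    t is the (possibly non-uniform) vector of time samples, s0 the initial state."""
    s = np.empty((len(t), len(s0)))
    s[0] = s0
    for n in range(len(t) - 1):
        h = t[n + 1] - t[n]                             # time-step h_n
        s[n + 1] = s[n] + h * f(s[n], x_of_t(t[n]))     # explicit update: no equation solving
    return s
```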


Alternatively, we could have also approximated the state derivative at tn by


$$\frac{s(t_{n+1}) - s(t_n)}{h_n} = f\left[s(t_{n+1}), x(t_{n+1})\right] \qquad (2.11)$$

which would then lead to the alternative recursive scheme of



$$s(t_{n+1}) = s(t_n) + h_n f\left[s(t_{n+1}), x(t_{n+1})\right] \qquad (2.12)$$

which is known as the backward Euler method.


Despite the obvious similarities between (2.10) and (2.12) (originated in their common genesis), they correspond to very distinct numerical methods. First of all, (2.10) gives the next state, s(tn+1), in an explicit way, since everything on its right-hand side is a priori known. On the contrary, (2.12) has an implicit formulation, since the next state s(tn+1) depends on the known s(tn) and x(tn+1), but also on itself. Hence, s(tn+1) can be obtained through simple and fast algebraic operations from (2.10), while the solution of (2.12) demands an iterative nonlinear solver, for which it is usually formulated as



$$s(t_{n+1}) - s(t_n) - h_n f\left[s(t_{n+1}), x(t_{n+1})\right] = 0 \qquad (2.13)$$

Actually, (2.13) can be solved using, e.g., a gradient method like the Newton-Raphson iteration [10].




A Brief Note on the Newton-Raphson Iteration


The Newton-Raphson iteration plays such an important role in nonlinear circuit simulation that we will outline it here. Based on a first-order Taylor series approximation, it finds the zero of a nonlinear function, or vector of functions, through the solution of a successively approximated linear system of equations. In the present case, the vector of functions whose zero is to be calculated is φ[s(tn+1)] ≡ s(tn+1) − s(tn) − hn f[s(tn+1), x(tn+1)], and so its first-order (linear) Taylor approximation in the vicinity of a certain predetermined estimate, s(tn+1)^i, becomes



$$\varphi\left[s(t_{n+1})\right] \approx \varphi\left[s(t_{n+1})^{i}\right] + J_{\varphi}\left[s(t_{n+1})^{i}\right]\left[s(t_{n+1})^{i+1} - s(t_{n+1})^{i}\right] = 0 \qquad (2.14)$$

where Jφ[s(tn+1)^i] is the Jacobian matrix of φ[s(tn+1)], whose element of row m and column l is calculated by


$$\left\{J_{\varphi}\left[s(t_{n+1})^{i}\right]\right\}_{ml} \equiv \left.\frac{\partial \varphi_{m}\left[s(t_{n+1})\right]}{\partial s_{l}(t_{n+1})}\right|_{s(t_{n+1})^{i}} \qquad (2.15)$$

i.e., it is the derivative of the m'th component of the vector of functions φ[s(tn+1)] with respect to the l'th component of the state-vector s(tn+1), evaluated at the previous estimate s(tn+1)^i.


Since (2.14) is now a linear system in the unknown s(tn+1)^(i+1), we can get a (hopefully) closer estimate of the desired zero by the following iteration:



$$s(t_{n+1})^{i+1} = s(t_{n+1})^{i} - \left\{J_{\varphi}\left[s(t_{n+1})^{i}\right]\right\}^{-1} \varphi\left[s(t_{n+1})^{i}\right] \qquad (2.16)$$

This is graphically illustrated in Figure 2.6 (a) for the simplest one-dimensional case. Please note that, if φ[s(tn+1)] were linear, its Jacobian matrix would be constant, i.e., independent of s(tn+1)^i, and the Newton-Raphson algorithm would converge in only one iteration. In general, the Newton-Raphson method is faster in the presence of mildly nonlinear problems and slower – i.e., it needs more iterations – when dealing with strongly nonlinear problems (higher curvature in the function of Figure 2.6). Furthermore, it requires an initial estimate of the solution, and it converges if the function φ[s(tn+1)] is well behaved, i.e., if its Jacobian matrix has no points of zero determinant. In fact, if s(tn+1)^i is a point in the state-space in which the determinant of the Jacobian is zero, or close to zero, the next estimate will suffer a big jump, possibly to a point very far from the desired solution φ[s(tn+1)] = 0, even if it were already close to it at s(tn+1)^i. As illustrated in Figure 2.6 (b), the Newton-Raphson iteration of (2.16) may thus diverge from the target solution even when it is already quite close to it. Alternatively, since in the vicinity of a zero determinant Jφ[s(tn+1)^i] changes sign, the iteration may stall, endlessly jumping between two distinct points, as depicted in Figure 2.6 (c). In this case, (2.16) does not diverge, but it also cannot converge to the solution.





Figure 2.6 Graphical illustration of the Newton-Raphson iteration in the simplest one-dimensional case. (a) Normal convergence of the Newton-Raphson iteration. (b) When the function whose zero we want to calculate, φ(s), has a zero in its derivative (the Jacobian), the iteration may diverge from the desired solution. (c) Situation in which the iteration neither diverges nor converges, but keeps wandering around the vicinity of a zero-derivative point.
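For completeness, here is a minimal Python sketch of the multidimensional iteration (2.16); phi and jac are user-supplied callables returning φ(s) and its Jacobian, and the tolerance and iteration limit are arbitrary choices.

```python
import numpy as np

def newton_raphson(phi, jac, s_guess, tol=1e-12, max_iter=50):
    """Solve phi(s) = 0 by the Newton-Raphson iteration (2.16)."""
    s = np.asarray(s_guess, dtype=float)
    for _ in range(max_iter):
        residual = phi(s)
        if np.max(np.abs(residual)) < tol:
            break
        # Solve J . delta = -phi instead of forming the inverse explicitly.
        delta = np.linalg.solve(jac(s), -residual)
        s = s + delta
    return s
```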


Unfortunately, the simplicity of (2.10) has a high price in numerical stability. Since it is a predictive method, its associated error tends to build up very rapidly, a problem that must be obviated by reducing the time-step, and thus increasing the overall simulation time. In practice, this increase in simulation time and the potential numerical instability are so significant, when compared to (2.12), that (2.10) is hardly ever used in practical circuit simulators.


Actually, (2.10) and (2.12) are not the only ways we could have used to integrate the state-equation. They are just examples of zero-order methods, in which the derivative is assumed constant during the time-step interval. For example, if a first-order integration rule (known as the trapezoidal integration rule) had been used, we would obtain [8]


$$s(t_{n+1}) = s(t_n) + h_n\,\frac{f\left[s(t_{n+1}), x(t_{n+1})\right] + f\left[s(t_n), x(t_n)\right]}{2} \qquad (2.17)$$

In general, the time-step integration scheme is formulated as


$$s(t_{n+1}) = s(t_n) + \int_{t_n}^{t_{n+1}} f\left[s(t), x(t)\right] dt \qquad (2.18)$$

of which the fourth-order Runge-Kutta method [8] is one of the most popular implementations (see Exercise 2.1).
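As a concrete instance of (2.18), a single step of the classical fourth-order Runge-Kutta method might be sketched as follows; the weights are the standard ones, and f and x_of_t follow the same placeholder conventions as before.

```python
def rk4_step(f, x_of_t, s, tn, h):
    """One classical 4th-order Runge-Kutta step for ds/dt = f(s, x(t)), from tn to tn + h."""
    k1 = f(s, x_of_t(tn))
    k2 = f(s + 0.5 * h * k1, x_of_t(tn + 0.5 * h))
    k3 = f(s + 0.5 * h * k2, x_of_t(tn + 0.5 * h))
    k4 = f(s + h * k3, x_of_t(tn + h))
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```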


Finally, it should be added that there is no need to keep the time-step, hn, constant. In fact, all modern time-domain simulators use dynamic time-steps, measuring the variation in time of s(t). Larger time-steps are used when s(t) is slowly varying, while smaller ones are adopted in zones of fast change of state. This way, a good compromise between accuracy and computation time is obtained.




Example 2.2 The Time-Step Integration Formulation


As an example of time-step integration, the trapezoidal rule discretization of our circuit example described by (2.5) and (2.6) would lead to


$$\begin{aligned}
s_1(t_{n+1}) &= s_1(t_n) + h_n\,\frac{f_1(t_{n+1}) + f_1(t_n)}{2}\\[4pt]
s_2(t_{n+1}) &= s_2(t_n) + h_n\,\frac{f_2(t_{n+1}) + f_2(t_n)}{2}\\[4pt]
s_3(t_{n+1}) &= s_3(t_n) + h_n\,\frac{f_3(t_{n+1}) + f_3(t_n)}{2}
\end{aligned} \qquad (2.19)$$

where fn(tn) is shorthand for fn[s(tn), x(tn)], the three functions defined in (2.6).


For each time step, tn, (2.19) is a nonlinear system of three equations in three unknowns, s1(tn+1), s2(tn+1) and s3(tn+1), which can be solved by the Newton-Raphson iteration of (2.16) making


$$\varphi\left[s(t_{n+1})\right] = \begin{bmatrix}
s_1(t_{n+1}) - s_1(t_n) - \frac{h_n}{2}\left[f_1(t_{n+1}) + f_1(t_n)\right]\\[6pt]
s_2(t_{n+1}) - s_2(t_n) - \frac{h_n}{2}\left[f_2(t_{n+1}) + f_2(t_n)\right]\\[6pt]
s_3(t_{n+1}) - s_3(t_n) - \frac{h_n}{2}\left[f_3(t_{n+1}) + f_3(t_n)\right]
\end{bmatrix} = 0 \qquad (2.20a)$$

or in a compacted form


$$\varphi\left[s(t_{n+1})\right] = s(t_{n+1}) - s(t_n) - \frac{h_n}{2}\left[f(t_{n+1}) + f(t_n)\right] = 0 \qquad (2.20b)$$

and


$$J_{\varphi}\left[s(t_{n+1})\right] = \begin{bmatrix}
1 - \frac{h_n}{2}\frac{\partial f_1(t_{n+1})}{\partial s_1(t_{n+1})} & -\frac{h_n}{2}\frac{\partial f_1(t_{n+1})}{\partial s_2(t_{n+1})} & -\frac{h_n}{2}\frac{\partial f_1(t_{n+1})}{\partial s_3(t_{n+1})}\\[8pt]
-\frac{h_n}{2}\frac{\partial f_2(t_{n+1})}{\partial s_1(t_{n+1})} & 1 - \frac{h_n}{2}\frac{\partial f_2(t_{n+1})}{\partial s_2(t_{n+1})} & -\frac{h_n}{2}\frac{\partial f_2(t_{n+1})}{\partial s_3(t_{n+1})}\\[8pt]
-\frac{h_n}{2}\frac{\partial f_3(t_{n+1})}{\partial s_1(t_{n+1})} & -\frac{h_n}{2}\frac{\partial f_3(t_{n+1})}{\partial s_2(t_{n+1})} & 1 - \frac{h_n}{2}\frac{\partial f_3(t_{n+1})}{\partial s_3(t_{n+1})}
\end{bmatrix}$$

$$= \begin{bmatrix}
1 + \frac{h_n}{2C_g}\left[\frac{1}{R_s} + g_m(t_{n+1})\right] & \frac{h_n}{2C_g}\left[\frac{1}{R_L} + g_{ds}(t_{n+1})\right] & \frac{h_n}{2C_g}\\[8pt]
\frac{h_n}{2C_g}\left[\frac{1}{R_s} + \frac{C_b+C_g}{C_b}\,g_m(t_{n+1})\right] & 1 + \frac{h_n(C_b+C_g)}{2C_gC_b}\left[\frac{1}{R_L} + g_{ds}(t_{n+1})\right] & \frac{h_n(C_b+C_g)}{2C_gC_b}\\[8pt]
0 & -\frac{h_n}{2L_d} & 1
\end{bmatrix} \qquad (2.21a)$$

or


$$J_{\varphi}\left[s(t_{n+1})\right] = I - \frac{h_n}{2}\frac{\partial f(t_{n+1})}{\partial s(t_{n+1})} \qquad (2.21b)$$

where I stands for the identity matrix, and gm(t) and gds(t) are, respectively, the FET's transconductance and output conductance, defined by gm(t) = ∂iD(vG, vD)/∂vG and gds(t) = ∂iD(vG, vD)/∂vD, both evaluated at [vG(t), vD(t)].


Figures 2.3 and 2.4 depict the results of this simulation for an excitation of vI(t) = VGG + vS(t) = 3 + 20 cos(2πfct) V, fc = 1 GHz, and with initial conditions s1(0) = 3.0 V, s2(0) = 10.0 V, and s3(0) = −10/6.6 A. This time-step integration used the trapezoidal rule of (2.19), solved at each step by the Newton-Raphson iteration of (2.16), and a fixed time-step of h = 1/(100fc).
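A simulation like the one just described can be reproduced, in sketch form, by marching the trapezoidal rule (2.17)/(2.19) with a fixed step and solving the implicit equation (2.20b) at every step. For brevity, SciPy's general-purpose fsolve is used here in place of the analytical-Jacobian Newton-Raphson iteration of (2.16) and (2.21a); the actual waveforms obviously depend on the drain-current model plugged into the state-function f (see Example 2.1), which is not specified here.

```python
import numpy as np
from scipy.optimize import fsolve

def trapezoidal_transient(f, x_of_t, s0, t):
    """Fixed-step trapezoidal integration of ds/dt = f(s, x(t)), per (2.17)/(2.19).
    Each implicit step (2.20b) is solved numerically (fsolve standing in for the
    Newton-Raphson iteration of (2.16))."""
    s = np.empty((len(t), len(s0)))
    s[0] = s0
    for n in range(len(t) - 1):
        h = t[n + 1] - t[n]
        fn = f(s[n], x_of_t(t[n]))
        phi = lambda s_next: s_next - s[n] - 0.5 * h * (f(s_next, x_of_t(t[n + 1])) + fn)
        s[n + 1] = fsolve(phi, s[n])   # previous state used as the initial estimate
    return s

# Illustrative use with the excitation and initial state of Example 2.2 (fc = 1 GHz):
# fc = 1e9
# t = np.arange(0.0, 400e-9, 1 / (100 * fc))
# x_of_t = lambda tt: np.array([3 + 20 * np.cos(2 * np.pi * fc * tt), 10.0])
# s = trapezoidal_transient(f, x_of_t, np.array([3.0, 10.0, -10 / 6.6]), t)
```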


If a frequency-domain representation of the response is desired, as is quite common in RF and microwave circuit analysis and design, then a DFT must be performed on the time-domain response, which requires that the signal be uniformly sampled and periodic.


Unfortunately, and as we have seen, simulation speed dictates the use of dynamic time-steps, which produces a non-uniformly sampled response. So, either one forces the simulator to use a fixed time-step, e.g., using the simulator's time-step ceiling option – making the time-domain simulation highly inefficient – or one relies on the hidden interpolation and resampling functions applied before the DFT, which are not error free – making the frequency-domain representation potentially inaccurate.


If the signal is not periodic, then the DFT will make it periodic, with a period equal to the simulated time, producing an error that is visible as spectral leakage [10], i.e., in which the energy of individual spectral lines is distributed onto (leaks to) their vicinity. This error is caused by the time-domain signal discontinuities that may exist at the beginning and end points of the constructed periodic signal, and is illustrated in Figure 2.7. It is particularly harmful in distortion analysis, where the distortion components may be totally immersed in the spectral leakage error. Hence, it is quite common to adopt measures to mitigate spectral leakage, either increasing the simulation time (in order to reduce the relative weight of these discontinuities in the overall signal) or simply eliminating these discontinuities, forcing the output signal to start and end at the same zero value – y(0) = y(T) = 0 – by multiplying y(t) by an appropriate window function [10].
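A minimal sketch of the windowing mitigation just described, assuming a uniformly sampled record y: the record is multiplied by a Hanning window, which forces it to start and end at zero before the DFT is taken. The normalization by the window sum is one common convention.

```python
import numpy as np

def windowed_spectrum(y, fs):
    """Return the frequency grid and the DFT of y after a Hanning window,
    mitigating the spectral leakage caused by end-point discontinuities."""
    w = np.hanning(len(y))               # w[0] = w[-1] = 0, so y*w starts and ends at zero
    Y = np.fft.rfft(y * w) / np.sum(w)   # window-gain normalization
    f = np.fft.rfftfreq(len(y), d=1 / fs)
    return f, Y
```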





Figure 2.7 Illustrative example of the spectral leakage generated when an aperiodic time-domain function is (discrete) Fourier transformed to the frequency domain. Please note the spectral leakage manifested as erroneous spectral power around the desired signal, and its mitigation by windowing. In this example, VdT(ω) is the frequency-domain representation of five periods of the result of our transient simulation, vD(t), collected between 50 ns and 55 ns, while VdH(ω) is the corresponding signal passed through a Hanning window [10]. The dotted spectrum, VdS(ω), is the same five periods but now collected after the steady-state was reached, i.e., collected between 380 ns and 385 ns.


This lack of periodicity of the output signal can be due to the nature of the problem at hand. For example, a two-tone signal of f1 and f2 is only periodic if its central frequency fc = (f1 + f2)/2 is a multiple of its frequency separation Δf = f2 − f1. But it may also be due to the presence of the transient response. In such a case, it is necessary to truncate the signal to its last few periods. This process is usually quite frustrating when simulating RF circuits, since the length of this annoying transient response can be many orders of magnitude larger than the length of the desired period, or of a few periods. Indeed, the length of these long transients is usually determined by the time-constants associated with the large dc decoupling capacitors and RF chokes of bias networks.


Therefore, the transient time-domain simulation of microwave and RF circuits tends to be either inefficient, inaccurate, or both, which motivated the development of analysis techniques specifically devoted to calculating the periodic steady-state.



2.2.2 Periodic Steady-State Analysis in the Time-Domain


Although one might think that the integration of the transient response is unavoidable, as it corresponds to the way the real circuit behaves, that is not necessarily the case. To understand why, let us recall the previously discussed problem that consists in determining the response of a forced system to a periodic excitation of period T. To guarantee that this problem is solvable, let us also consider that the steady-state response is, indeed, periodic, of the same period T of the excitation.


Obviously, the complete response starts with the aperiodic transient, which is then followed by the periodic steady-state. This transient is naturally governed by the system, the excitation and the assumed initial conditions, or initial state, while the periodic steady-state is only dependent on the system and the excitation (in fact, its period is solely determined by the excitation). Therefore, different initial states lead to distinct transient responses, but always to the same steady-state. Actually, a close observation of Figures 2.3 and 2.4 indicates that different initial states may lead to transients of very different length, depending on whether we start from an initial state that is close, or very far, from the final cyclic orbit that describes the periodic steady-state. In fact, the notion of state tells us that all states of that cyclic orbit keep the system in the same orbit, i.e., lead to a transient response of zero length or, in other words, a response without transient.




Example 2.3 A Time-Step Integration Simulation that Does Not Exhibit a Transient


Although the mere existence of a transient simulation without a transient may sound absurd, you can convince yourself that this can indeed be true by running the following example. Pick any lumped-element circuit you know that, when excited by a periodic stimulus, produces a periodic steady-state response. Perform a normal transient simulation using a SPICE-like circuit simulator for a time that is sufficiently long to let the circuit reach its periodic steady-state. You will naturally see a transient response followed by the periodic steady-state. Analyze this steady-state response, observing and recording the values of all state-variables, for example all capacitor voltages and inductor currents, at the (arbitrary) beginning of one period of the excitation. Now, run the same transient simulation, but use the simulator option that allows you to set the initial values of all energy-storage elements to the values previously recorded (use all precision digits the simulator provides). You can run the transient simulator for a much smaller time-span, as you will see that the response is completely periodic, i.e., that there is no transient response.


We actually performed this experiment on our circuit example, starting with the same initial conditions and excitation used for our previous transient simulation, i.e., vG(0) = 3.0 V, vD(0) = 10.0 V, iL(0) = −10/6.6 A and vI(t) = 3 + 20 cos(2π·10⁹ t) V, measured these variables at t = 384 ns, and then used these values [vG(384 ns) = 3.118026263640600 V, vD(384 ns) = 11.183192073318860 V and iL(384 ns) = −2.625494710083924 A] as the new initial conditions. As expected, the simulator response went directly to the steady-state response of Figures 2.3 and 2.4 without any transient.


By the way, this example leads us to a very curious question: which were, after all, the energy-storage initial conditions used by default by the simulator in the first run? And how did it select them? You will find an answer to these questions in the next section.


Now that we are convinced that our problem of bypassing the transient response may have a solution, it is reduced to finding a way to determine these appropriate initial states (see Exercise 2.2). There are, at least, three known techniques to do that.


The first method we can imagine to determine the proper initial states (i.e., the ones that are already contained in the cyclic orbit) was already discussed. Indeed, although the time-step integration starts from an initial state that may be very far from the desired one, the state it reaches after one period is closer. Then, this new updated state is inherently used as the initial state for the time-step integration of the second period, from which a new update is found, in a process that we know converges to the desired proper initial state. Our problem is that this process is usually too slow, due to the large dc decoupling bias-network capacitors and RF chokes. In an attempt to speed up the transient calculation, a simple dc analysis (in which all capacitors and inductors are treated as open or short circuits, respectively) is performed prior to any transient simulation. Then, the dc open-circuit voltages across the capacitors and the short-circuit currents through the inductors are used as the initial conditions of the corresponding voltages and currents in the actual transient simulation, unless other initial conditions are imposed by the user. These dc values constitute, therefore, the default initial state that the simulator uses. When these dc quiescent values are already very close to the actual dc bias values, as is the case in all linear or quasi-linear circuits, such as small-signal or class A power amplifiers, the transient response is short and this method of finding the default initial conditions can be very effective. Otherwise, i.e., when strongly nonlinear circuits such as class C amplifiers or rectifiers are to be simulated, these default values are still very far from the actual bias point and a long transient response subsists.


The second and third methods find the proper initial condition using a completely different strategy: they change the nature of the numerical problem.



2.2.2.1 Initial-Value and Boundary-Value Problems

Up to now, we have considered that our circuit analysis could be formulated as an initial-value problem, i.e., a problem defined by setting boundary conditions at the initial time (the initial state) and a rule for the time evolution through all subsequent time-steps (the state-equation). No boundary conditions are defined at the end of the simulation time; as we know, this simulation time can be arbitrarily set by the user. When we are willing to determine the periodic steady-state, and we know a priori that the steady-state period is the period of the excitation, we can also define a final boundary condition and reformulate the analysis as a boundary-value problem. In this case, the initial state, the internal rule, and the final state are all defined a priori. Well, rigorously speaking, what is defined is the internal rule and an equality constraint between the initial and the final states: s(t0 + T) = s(t0).



2.2.2.2 Finite-Differences in Time-Domain

Having this boundary-value problem in mind, the second method is known as finite differences in the time domain, and is directly formulated from the discretization of the state-equation and the periodic boundary condition s(t0 + T) = s(t0). Using again a zero-order backward Euler discretization over N + 1 time-samples, we can define the following (incomplete) system (for each state-variable or each circuit node) of N equations in N + 1 unknowns


$$\begin{aligned}
s(t_1) &= s(t_0) + h_0 f\left[s(t_1), x(t_1)\right]\\
&\;\;\vdots\\
s(t_{n+1}) &= s(t_n) + h_n f\left[s(t_{n+1}), x(t_{n+1})\right]\\
&\;\;\vdots\\
s(t_N) &= s(t_{N-1}) + h_{N-1} f\left[s(t_N), x(t_N)\right]
\end{aligned} \qquad (2.22)$$

which, augmented by the boundary equation s(tN) = s(t0), constitutes a (now complete) system of N + 1 equations in N + 1 unknowns that can be readily solved through an (N + 1)-dimensional Newton-Raphson iteration scheme.
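In sketch form, the system (2.22) plus the periodic boundary condition can be stacked into one large residual and handed to a nonlinear solver; SciPy's fsolve is used below merely as a stand-in for the (N + 1)-dimensional Newton-Raphson iteration mentioned above, and a backward Euler discretization with a uniform step is assumed.

```python
import numpy as np
from scipy.optimize import fsolve

def fd_periodic_steady_state(f, x_of_t, T, N, s_guess):
    """Finite-differences-in-time-domain PSS: solve the backward Euler equations (2.22)
    over N+1 samples of one period T, together with the constraint s(t_N) = s(t_0)."""
    t = np.linspace(0.0, T, N + 1)
    h = t[1] - t[0]
    dim = len(s_guess)

    def residual(flat):
        s = flat.reshape(N + 1, dim)            # unknowns s(t_0) ... s(t_N)
        r = np.empty_like(s)
        for n in range(N):                      # the N difference equations of (2.22)
            r[n] = s[n + 1] - s[n] - h * f(s[n + 1], x_of_t(t[n + 1]))
        r[N] = s[N] - s[0]                      # periodic boundary condition
        return r.ravel()

    flat0 = np.tile(np.asarray(s_guess, dtype=float), (N + 1, 1)).ravel()
    return fsolve(residual, flat0).reshape(N + 1, dim), t
```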



2.2.2.3 Shooting Method

The third alternative to find the proper initial condition that leads to the desired steady-state is known as the shooting or periodic steady-state, PSS, method. Conceptually, it can be described as a trial-and-error process in which we start by simulating the circuit for one period with some initial state guess, for example, the one given by an a priori dc analysis, s(t0)^1. In our circuit example, this attempt would lead to the first set of trajectories depicted in the phase-space plots of Figure 2.8, in which s(tN)^1 ≠ s(t0)^1, indicating, as expected, that we missed the periodic steady-state. By slightly changing the tested initial condition by some Δs(t0), say s(t0)^2 = s(t0)^1 + Δs(t0), we can try the alternative attempt identified as the second set of trajectories of Figure 2.8. Although this one led to another failure, since s(tN)^2 is still different from s(t0)^2, it may provide us an idea of the sensitivity of the final condition to the initial condition, Δs(tN)/Δs(t0), which we may then use to direct our search for better trials s(t0)^3, s(t0)^4, …, s(t0)^K, until the desired steady-state is achieved, i.e., when s(tN)^K = s(t0)^K.





Figure 2.8 Illustration of the shooting-Newton method applied to our circuit example excited by vI(t) = 3 + 20 cos(2π·10⁹ t) V, where all the required five Newton iterations are shown. As done for the previous transient analysis, we started from the initial conditions derived from dc analysis, vG(0) = 3.0 V, vD(0) = 10.0 V and iL(0) = −10/6.6 A, and ended up on vG(0) = 3.118026121336410 V, vD(0) = 11.183189430567937 V and iL(0) = −2.625492967125298 A.
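Finally, a hedged sketch of the shooting idea: wrap any one-period transient integration (for instance, the trapezoidal routine sketched after Example 2.2) in a function that maps a trial initial state to the state reached one period later, and let a nonlinear solver enforce s(t0 + T) = s(t0). SciPy's fsolve again stands in for the sensitivity-guided Newton corrections described above.

```python
import numpy as np
from scipy.optimize import fsolve

def shooting_pss(integrate_one_period, s0_guess):
    """Shooting/PSS method: find s(t0) such that s(t0 + T) - s(t0) = 0.
    integrate_one_period(s0) must return the state reached after exactly one
    period of the excitation, starting from s0 (e.g., a trapezoidal transient
    run over [t0, t0 + T])."""
    def residual(s0):
        return integrate_one_period(s0) - s0
    # fsolve's internal Newton-like updates play the role of the
    # Δs(tN)/Δs(t0) sensitivity-guided corrections of the text.
    return fsolve(residual, np.asarray(s0_guess, dtype=float))
```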
