The main goal of this chapter is to review the various nonlinear simulation techniques available for RF and microwave circuit analysis, and to present the basic operation and properties of those techniques. Although this is not intended to be a course on nonlinear analysis or simulation, which is far beyond the scope of this book, we believe it is still important for the reader to have a basic idea of how these simulation tools function. This way, he or she can select the most appropriate technique for the particular task at hand and be aware of its limitations.

Although mathematical details will be kept to a minimum, they can still intimidate many microwave engineers. If this is the case, do not worry: stick to the qualitative explanations and the physical interpretations provided to describe the equations. Ultimately, these are more important than the mathematical derivations themselves.

Before we move on to discuss the details of each of the simulation techniques, it is convenient to first state a set of rules common to all of them. So, we start by defining what is meant by circuit or system simulation.

As explained in Chapter 1, the concept of system can be used in its most general form to represent either circuit elements, circuits, or complete RF systems. Therefore, we will use the notion of system whenever we want to achieve this level of generality, and we will adopt the designation of circuit element, component, circuit, or RF system otherwise.

Nowadays, circuit or RF system simulation is a process in which the behavior of a real circuit or RF system is mimicked by solving, on a digital computer, the equations that – we believe – describe the signal excitation and govern the physical operation of that circuit. Hence, the success of a simulation should be evaluated by comparing the real circuit or RF system behavior with the one predicted by the simulator. The simulation accuracy depends on both the adopted equations and the numerical techniques used to solve them. The set of equations that represents the behavior of each element constitutes the model of the element. The set of equations that describes how these are connected (the circuit or RF system topology) defines the model of the system. The numerical techniques, or numerical methods, used to solve these equations are known as the numerical solvers.

Contrary to linear systems, most nonlinear systems found in practice are described by complex nonlinear equations that do not have any closed-form solution. Hence, there is no way to get a general understanding of the system’s behavior, and we have to rely on partial views of its response to a few particular excitations. This is the underlying reason why simulation plays a much more important role in nonlinear circuit or RF system design than in its linear counterpart.

Currently, the discipline of mathematics on numerical methods is so advanced that the accuracy limitation of modern circuit simulators resides almost entirely in the models’ accuracy. But, since the circuit’s equations are often derived from nodal analysis (a statement of the universal law of charge conservation), we would not be exaggerating if we said that circuit simulation accuracy limitations must be attributed solely to the circuit components’ models, or to a possible misuse of the simulator (for example, expecting it to do something for which it was not intended).

Because it is the users who select the simulator and, most of the time, it is their responsibility to provide it with the appropriate models, it is essential that they have, at least, a basic understanding of what the simulator does, so that they can a priori build a rough estimate of the simulation result. Otherwise, the entire simulation effort can be reduced to a frustrating “garbage in, garbage out.”

Actually, the main objective of this book is to provide the RF/microwave engineer with the knowledge that prevents him or her from falling into any of these traps. The present chapter aims at a correct use of the simulator, while the next four address the components’ models.

This chapter on simulation is organized in four major sections. Section 2.1 deals with the mathematical representation, or model, of signals and systems. Section 2.2 treats time-domain simulation tools for aperiodic and periodic regimes; hence, its main focus is the transient analysis of SPICE-like simulators. Although Section 2.2 already addresses the steady-state periodic regime in the time domain, via the so-called periodic steady-state, or PSS, algorithm, the periodic steady-state is the core of the harmonic-balance, HB, frequency-domain technique of Section 2.3. Finally, Section 2.4 describes a group of algorithms especially devoted to the analysis of circuits and telecommunication systems excited by modulated RF carriers. Among these, special attention is paid to the envelope-transient harmonic-balance, ETHB, a mixed-domain – time and frequency – simulation tool.

### 2.1 Mathematical Representation of Signals and Systems

As said above, simulation tools comprise the mathematical representation – equations or models – of the system whose behavior is to be predicted, and the numerical solvers for these equations. Therefore, this subsection is intended to introduce general forms for these representations. For that, we will consider the wireless transmitter depicted in Figure 2.1.

Figure 2.1 The block diagram of a wireless transmitter used to illustrate the disparate nature of circuits and signals dealt with by RF circuit designers.

#### 2.1.1 Representation of Circuits and Systems

In the wireless transmitter example in Figure 2.1 we can recognize a very heterogeneous set of circuits such as digital signal-processing and control, mixed digital-analog, analog IF or baseband, and RF. Considering only this latter subset, we can note circuits that operate in an *autonomous* way, like the local oscillator, and ones that are *driven*, and thus present a *forced* behavior. In addition, we may also divide them into circuits solely composed of *lumped* elements (like ideal resistors, capacitors, inductors, and controlled sources) or into ones that include *distributed* elements (such as transmission lines, microstrip discontinuities, waveguides, etc.).

##### 2.1.1.1 Circuits of Lumped Elements

As seen in Chapter 1, the response of dynamic circuits or systems cannot be given as a function of solely the instantaneous input, but depends also on the input’s past, stored as the circuit’s state. For example, in lumped circuits, i.e., circuits composed of only lumped elements, this state comprises the electric charge stored in the capacitors and the magnetic flux stored in the inductors, the so-called system’s *state-variables* or *state-vector*. Assuming that these capacitor charges and inductor fluxes are memoryless functions of the corresponding voltages and currents (the quasi-static assumption, in which accumulated charges, fluxes, or currents are assumed to be instantaneous functions of the applied fields), the *state-equation* is the mathematical rule that describes how the state-vector evolves in time. It is an ordinary differential equation, ODE, in time that expresses the time-rate of change of the state-vector, i.e., its derivative with respect to time, as a function of the present state and excitation:

d**s**(*t*)/d*t* = **f**[**s**(*t*), **x**(*t*)] (2.1)

where **s**(*t*) is the state-vector, **x**(*t*) is the vector of inputs, or *excitation-vector* (independent current and voltage sources), and **f**[**s**(*t*), **x**(*t*)] is a set of functions, each one derived from applying Kirchhoff’s current law, KCL, to the nodes, and Kirchhoff’s voltage law, KVL, to the branches.

With this state-equation, one can calculate the trajectory of the state-vector over time, **s**(*t*). Then, it is easy to compute the *output-vector*, **y**(*t*), i.e., the set of observable currents, voltages, or any algebraic operation on these (such as power, for example) from the following *output-equation*:

**y**(*t*) = **g**[**s**(*t*), **x**(*t*)] (2.2)
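To make the notation concrete before we move to the circuit of Figure 2.2, here is a minimal sketch of the state- and output-equations (2.1) and (2.2) for a hypothetical first-order RC low-pass section (a toy example of ours, with arbitrary element values, not a circuit from this chapter):

```python
# Canonical state- and output-equations for a hypothetical first-order
# RC low-pass section: the state s is the capacitor voltage, the
# excitation x is the source voltage, and the observable y is the
# capacitor voltage itself.

R = 50.0       # series resistance, ohm (illustrative)
C = 50e-12     # capacitance, F (illustrative)

def f(s, x):
    """State-equation right-hand side: ds/dt from KCL at the capacitor node."""
    return (x - s) / (R * C)

def g(s, x):
    """Output-equation: here the observable is the state itself."""
    return s
```

For this single-state circuit the state-, excitation-, and output-vectors all collapse to scalars; in general, **f** and **g** return vectors assembled from KCL and KVL.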

For an illustrative example of this state-equation and the corresponding output-equation, let us use the circuit of Figure 2.2. The application of Kirchhoff’s current law to the input and output nodes leads to

which can be rewritten as

Defining now *s*_{1}(*t*) ≡ *v*_{G}(*t*), *s*_{2}(*t*) ≡ *v*_{D}(*t*), *s*_{3}(*t*) ≡ *i*_{L}(*t*), *x*_{1}(*t*) ≡ *v*_{I}(*t*) and *x*_{2}(*t*) ≡ *V*_{DD} allows us to express these nodal equations in the canonical state-equation form of (2.1) as

where *N* = 3, *M* = 2 and

Assuming that the desired observable is the output voltage, *v*_{O}(*t*), the canonical output equation can be obtained following a similar procedure, defining *y*(*t*) ≡ *v*_{O}(*t*) = *v*_{D}(*t*) = *s*_{2}(*t*)

*y*(*t*) = *g*[*s*_{1}(*t*), *s*_{2}(*t*), *s*_{3}(*t*), *x*_{1}(*t*), *x*_{2}(*t*)]

which, in this case, is simply *y*(*t*) = *s*_{2}(*t*).

Figure 2.2 Simplified FET amplifier circuit (a) and its equivalent circuit (b) intended to illustrate the analysis’s algorithms. *v*_{I}(*t*) = *V*_{GG} + *v*_{S}(*t*) where *v*_{S}(*t*) is the excitation, *V*_{GG} = 3 V, *V*_{DD} = 10 V, *R*_{s} = 50 Ω, *C*_{g} = 50 pF, *C*_{b} = 4.4 pF, *L*_{d} = 120 nH, and *R*_{L} = 6.6 Ω. The MOSFET has a transconductance of *g*_{m} = 10 A/V, a threshold voltage of *V*_{T} = 3 V, and a knee voltage of *V*_{K} = 1 V.

Because this system is a third-order one, its behavior is completely described by a three-dimensional state-vector, i.e., involving three state-variables. So, as explained in Section 1.4.4, its phase-portrait should be represented in a volume whose axes are *s*_{1}, *s*_{2}, and *s*_{3}, or *v*_{G}, *v*_{D}, and *i*_{L}, as is shown in Figure 2.3. Please note that the state-variables correspond to the expected circuit’s three independent energy-storing elements: *s*_{1}(*t*) ≡ *v*_{G}(*t*) is a measure of the accumulated charge (electric field) in the input capacitor *C*_{g}, *s*_{2}(*t*) ≡ *v*_{D}(*t*) is needed to build *v*_{GD}(*t*) = *v*_{G}(*t*) − *v*_{D}(*t*) = *s*_{1}(*t*) − *s*_{2}(*t*), a measure of the accumulated charge in the feedback capacitor *C*_{b}, and *s*_{3}(*t*) ≡ *i*_{L}(*t*) stands for the accumulated magnetic flux in the output drain inductor *L*_{d}.

Figure 2.3 (a) Illustrative trajectory of the three-dimensional system state, [*x y z*] ≡ [*v*_{G} *v*_{D} *i*_{L}], of our example circuit. (b) Projection of the 3D state-space trajectory onto the [*v*_{G} *v*_{D}] plane. (c) Projection of the 3D state-space trajectory onto the [*v*_{G} *i*_{L}] plane. (d) Projection of the 3D state-space trajectory onto the [*v*_{D} *i*_{L}] plane.

Although the phase-portrait is a parametric curve in which the evolving parameter is time, it does not allow a direct observation of the trajectory of each of the state-variables over time. In fact, a graph like Figure 2.3 (a) cannot tell us when the system is more active or, on the contrary, when it is almost latent. Such a study would require separate graphs of *s*_{1}(*t*), *s*_{2}(*t*) or *s*_{3}(*t*) versus time as is exemplified in Figure 2.4 for *s*_{1}(*t*) and *s*_{2}(*t*).

Figure 2.4 Time evolution of two of the system’s state-variables over time. (a) *s*_{1}(*t*) = *v*_{G} (*t*) and (b) *s*_{2}(*t*) = *v*_{D}(*t*).

##### 2.1.1.2 Circuits with Distributed Elements

Contrary to circuits that include only lumped elements, circuits or systems that involve distributed components, such as transmission lines or other distributed electromagnetic multi-ports, cannot be described by ODEs in time, requiring instead a description in both time and space. That is the case of the power divider/combiner, the circulator, or the output microwave filter of Figure 2.1, and of the transmission line of Figure 2.5. As an example, the transmission line is described by the following partial differential equations, PDEs, in time, *t*, and space, *x*:

−∂*v*(*x*, *t*)/∂*x* = *R* *i*(*x*, *t*) + *L* ∂*i*(*x*, *t*)/∂*t*
−∂*i*(*x*, *t*)/∂*x* = *G* *v*(*x*, *t*) + *C* ∂*v*(*x*, *t*)/∂*t*

where *i*(*x*, *t*) and *v*(*x*, *t*) are the distributed current and voltage along the line and *R*, *L*, *C*, and *G* stand for the usual line’s distributed resistance, inductance, capacitance, and conductance per unit length, respectively.

Please note that, because all time-domain circuit simulators are programmed to solve ODEs, they require that the circuit be represented by the state- and output-equations of (2.1) and (2.2). Consequently, a lumped representation is needed for distributed components. Such lumped representation is usually given in the form of an approximate equivalent circuit, obtained from the frequency-domain behavior observed at the device ports, which mimics the operation of the original multiport [1]. However, alternative behavioral representations, such as the one based on the impulse response function and subsequent convolution with the component’s excitation, are also possible [2].

It should be emphasized that these are behavioral representations, not real equivalent circuits, as distributed elements do not have a true lumped representation. For example, even our uniform transmission line of Figure 2.5 would need an infinite number of elementary lumped RLCG sections. Therefore, if one is willing to simulate such a circuit in the time domain, he or she should be aware that he or she is not simulating the actual circuit but another one that, hopefully, produces a similar result. For example, if the extracted “equivalent” circuit is not guaranteed to obey certain properties, such as passivity [3], one may end up with nonconvergence or erroneous solutions, such as a nonphysical (and thus impossible) oscillatory response from an original circuit composed of only passive components.

#### 2.1.2 Representation of Signals

As far as the signals are concerned, we could group them into baseband – or slowly evolving – information signals, fast-varying RF carriers, and modulated signals, in which the first two are combined. Furthermore, as shown in the spectra included in Figure 2.1, we could also note that, contrary to the RF carriers, which are periodic and thus narrowband, baseband signals are wideband (they span many octaves or decades) and are necessarily aperiodic, as they carry information (nonrepeatable, by definition). Therefore, while the periodic RF carrier can be represented in either the time or the frequency domain, baseband signals must necessarily be treated in the time domain, since the discrete Fourier series assumes periodicity in both time and frequency.

Actually, because we will be using a digital computer, i.e., a machine with a finite number of states, it will necessarily impose a discrete – sampled and confined – description of any time- or frequency-domain entities. This means that if you want to represent, in the frequency domain, a certain continuous-time signal extending from, say, 0 to *T* seconds, what you will get is the frequency-domain representation of another discrete and periodic signal, of period *T*, which extends from –∞ to +∞, and whose sampling frequency, *f*_{s}, is determined by the computer time resolution. Such a signal has a discrete and periodic spectrum whose frequency resolution is *Δf* = 1/*T* and whose period is *f*_{s}. Conversely, any continuous-spectrum signal will be treated as one of discrete spectrum, which has a periodic time-domain representation. In addition, any Fourier conversion between these two domains will necessarily be executed via the discrete Fourier transform, DFT, or its fast version, the FFT, in case the number of time and frequency points is a power of two.
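These sampling relations can be checked numerically. The sketch below (illustrative values of ours, using NumPy’s FFT) shows that a record of length *T* yields a frequency resolution of exactly *Δf* = 1/*T*, and that a tone falling on an integer multiple of *Δf* occupies a single spectral line:

```python
import numpy as np

# A record of length T sampled at fs is treated by the DFT as one period
# of a periodic signal, so its spectrum is discrete with resolution
# df = 1/T and periodic with period fs. Values below are illustrative.
fs = 1e3                     # sampling frequency, Hz
T = 0.5                      # record length, s -> expect df = 2 Hz
N = int(fs * T)              # number of time samples
t = np.arange(N) / fs
x = np.sin(2.0 * np.pi * 10.0 * t)   # 10 Hz tone, an integer multiple of df

X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
df = freqs[1] - freqs[0]     # frequency resolution, equals 1/T
peak = freqs[np.argmax(X)]   # the tone falls exactly on one spectral line
```

A tone that is not an integer multiple of *Δf* would instead spread over neighboring bins, an effect discussed later in this chapter as spectral leakage.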

### 2.2 Time-Domain Circuit Analysis

This section deals with circuit analysis techniques that operate in the time domain, such as *SPICE* or SPICE-like transient simulators [4], [5], and the ones known for calculating the periodic steady-state in the time domain. These simulators assume that the excitation-vector is known for the period of time where the response is desired and that the circuit is represented by the state- and output-equations of (2.1) and (2.2). In addition, it is assumed that the response is desired as a waveform in the time-domain.

#### 2.2.1 Time-Step Integration Engines

Time-step integration is used in all SPICE-like *transient simulators* [4], [5] but also in many system simulators [6–8]. Since it aims at following the evolution of the circuit in time, it can be seen as the most “natural” circuit analysis technique, because it closely follows the conceptual interpretation we make of the circuit operation. Actually, assuming one starts from a known excitation, **x**(*t*), discretized at a number of appropriate time-points, **x**(*t*_{n}), and an also known initial state, **s**(*t*_{0}), one can easily compute the response at all future time samples, **s**(*t*_{n}), from the discretization of the state-equation. This discretization reflects the knowledge that the derivative of the state at time *t*_{n} is the limit, when the time-step length, *h*_{n}, tends to zero, of the ratio [**s**(*t*_{n + 1}) − **s**(*t*_{n})]/*h*_{n}. Therefore, the state-equation (2.1) can be approximated by the following difference equation

[**s**(*t*_{n + 1}) − **s**(*t*_{n})]/*h*_{n} = **f**[**s**(*t*_{n}), **x**(*t*_{n})] (2.9)

which allows the calculation of all **s**(*t*_{n}) using the following recursive scheme known as the forward Euler method [9]

**s**(*t*_{n + 1}) = **s**(*t*_{n}) + *h*_{n}**f**[**s**(*t*_{n}), **x**(*t*_{n})] (2.10)

on a time-step by time-step basis. It is this time-step integration of the state-equation that gives this family of methods its name.
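As a minimal illustration (a scalar example of ours, not the circuit of Figure 2.2), the forward Euler recursion of (2.10) can be coded in a few lines for the test state-equation d*s*/d*t* = −*as*, whose exact solution is *s*(*t*) = *s*(0)e^{−*at*}:

```python
import math

# Forward Euler integration of the scalar test state-equation
# ds/dt = f(s) = -a*s (illustrative), exact solution s(t) = s(0)*exp(-a*t).
a = 2.0

def f(s):
    return -a * s

h = 1e-3                  # fixed time-step
s = 1.0                   # initial state s(0)
for _ in range(1000):     # integrate up to t = 1 s
    s = s + h * f(s)      # the explicit forward Euler update

exact = math.exp(-a * 1.0)
```

With *h* = 1 ms the accumulated error at *t* = 1 s is already below 10⁻³; halving *h* roughly halves it, the signature of a first-order-accurate explicit scheme.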

Alternatively, we could have also approximated the state derivative at *t*_{n} by

[**s**(*t*_{n + 1}) − **s**(*t*_{n})]/*h*_{n} = **f**[**s**(*t*_{n + 1}), **x**(*t*_{n + 1})] (2.11)

which would then lead to the alternative recursive scheme of

**s**(*t*_{n + 1}) = **s**(*t*_{n}) + *h*_{n}**f**[**s**(*t*_{n + 1}), **x**(*t*_{n + 1})] (2.12)

which is known as the backward Euler method.

Despite the obvious similarities between (2.10) and (2.12) (which originate in their common genesis), they correspond to very distinct numerical methods. First of all, (2.10) gives the next state, **s**(*t*_{n+1}), in an explicit way, since everything on its right-hand side is a priori known. On the contrary, (2.12) has an implicit formulation, since the next state **s**(*t*_{n+1}) depends not only on the known **s**(*t*_{n}) and **x**(*t*_{n+1}), but also on itself. Hence, **s**(*t*_{n+1}) can be obtained through simple and fast algebraic operations from (2.10), while the solution of (2.12) demands an iterative nonlinear solver, for which it is usually formulated as

**s**(*t*_{n + 1}) − **s**(*t*_{n}) − *h*_{n}**f**[**s**(*t*_{n + 1}), **x**(*t*_{n + 1})] = 0 (2.13)

Actually, (2.13) can be solved using, e.g., a gradient method like the *Newton-Raphson iteration* [10].

The Newton-Raphson iteration plays such an important role in nonlinear circuit simulation that we will outline it here. Based on a first order Taylor series approximation, it finds the zero of a nonlinear function, or vector of functions, through the solution of a successively approximated linear system of equations. In the present case, the vector of functions whose zero is to be calculated is **φ**[**s**(*t*_{n + 1})] ≡ **s**(*t*_{n + 1}) − **s**(*t*_{n}) − *h*_{n}**f**[**s**(*t*_{n + 1}), **x**(*t*_{n + 1})], and so its first-order (linear) Taylor approximation in the vicinity of a certain predetermined estimate, **s**(*t*_{n + 1})_{i}, becomes

**φ**[**s**(*t*_{n + 1})] ≈ **φ**[**s**(*t*_{n + 1})_{i}] + **J**_{φ}[**s**(*t*_{n + 1})_{i}][**s**(*t*_{n + 1})_{i + 1} − **s**(*t*_{n + 1})_{i}] = 0 (2.14)

where **J**_{φ}[**s**(*t*_{n + 1})_{i}] is the Jacobian matrix of **φ**[**s**(*t*_{n + 1})], whose element of row *m* and column *l* is calculated by

*J*_{φ, ml} = ∂*φ*_{m}[**s**(*t*_{n + 1})]/∂*s*_{l}(*t*_{n + 1}), evaluated at **s**(*t*_{n + 1})_{i} (2.15)

i.e., it is the derivative of the *m*’th component of the vector of functions **φ**[**s**(*t*_{n + 1})] with respect to the *l*’th component of the state-vector **s**(*t*_{n + 1}), evaluated at the previous estimate **s**(*t*_{n + 1})_{i}.
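When analytic derivatives of **φ** are not available, these Jacobian entries can be approximated by finite differences (a generic numerical fallback; the helper below is a sketch of ours, not code from any particular simulator):

```python
# Finite-difference approximation of the Jacobian matrix of phi:
# perturb one state-variable at a time and difference the residuals.
def jacobian(phi, s, eps=1e-7):
    """Approximate J[m][l] = d(phi_m)/d(s_l) at the state-vector s."""
    n = len(s)
    base = phi(s)
    J = [[0.0] * n for _ in range(n)]
    for l in range(n):
        sp = list(s)
        sp[l] += eps               # perturb the l'th state-variable
        pert = phi(sp)
        for m in range(n):
            J[m][l] = (pert[m] - base[m]) / eps
    return J
```

Each column requires one extra evaluation of **φ**, which is why production simulators prefer analytic derivatives supplied by the device models whenever possible.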

Since (2.14) is now a linear system in the unknown **s**(*t*_{n + 1})_{i + 1}, we can get a (hopefully) closer estimate of the desired zero by the following iteration:

**s**(*t*_{n + 1})_{i + 1} = **s**(*t*_{n + 1})_{i} − **J**_{φ}[**s**(*t*_{n + 1})_{i}]^{−1}**φ**[**s**(*t*_{n + 1})_{i}] (2.16)

This is graphically illustrated in Figure 2.6 (a) for the simplest one-dimensional case. Please note that, if **φ**[**s**(*t*_{n + 1})] were linear, its Jacobian matrix would be constant, i.e., independent of **s**(*t*_{n + 1})_{i}, and the Newton-Raphson algorithm would converge in only one iteration. In general, the Newton-Raphson method is faster in the presence of mildly nonlinear problems and slower – i.e., it needs more iterations – when dealing with strongly nonlinear problems (higher curvature in the function of Figure 2.6). Furthermore, it requires an initial estimate of the solution, and converges if the function **φ**[**s**(*t*_{n + 1})] is well behaved, i.e., if its Jacobian matrix has no points of zero determinant. In fact, if **s**(*t*_{n + 1})_{i} is a point in the state-space in which the determinant of the Jacobian is zero, or close to zero, the next estimate will suffer a big jump, possibly to a point very far from the desired solution of **φ**[**s**(*t*_{n + 1})] = 0, even if it were already close to it at **s**(*t*_{n + 1})_{i}. As illustrated in Figure 2.6 (b), the Newton-Raphson iteration of (2.16) may diverge from the target solution even when it is already quite close to it. Alternatively, since in the vicinity of a zero determinant **J**_{φ}[**s**(*t*_{n + 1})_{i}] changes sign, the iteration may stall, endlessly jumping between two distinct points as depicted in Figure 2.6 (c). In this case, (2.16) does not diverge, but it cannot converge to the solution either.
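The following sketch ties (2.12), (2.13), and (2.16) together for a scalar example of our own choosing, d*s*/d*t* = −*s*³: one backward Euler step is computed by driving *φ*(*s*) to zero with the Newton-Raphson update (the Jacobian reduces here to a scalar derivative):

```python
# One backward Euler step for the scalar nonlinear state-equation
# ds/dt = f(s) = -s**3 (illustrative), solved with the Newton-Raphson
# iteration: drive phi(s) = s - s_n - h*f(s) to zero.

def f(s):
    return -s ** 3

def dfds(s):
    return -3.0 * s ** 2

h = 0.1                    # time-step
s_n = 1.0                  # known state s(t_n)

s = s_n                    # initial estimate: the previous state
for _ in range(20):
    phi = s - s_n - h * f(s)
    J = 1.0 - h * dfds(s)  # scalar "Jacobian" of phi
    s = s - phi / J        # the Newton-Raphson update

residual = s - s_n - h * f(s)  # should now be numerically zero
```

Because *φ* is only mildly nonlinear here, a handful of iterations drives the residual to machine precision; near a zero of the Jacobian the same loop could instead jump far away or stall, as Figure 2.6 illustrates.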

Figure 2.6 Graphical illustration of the Newton-Raphson iteration in the simplest one-dimensional case. (a) Normal convergence of the Newton-Raphson iteration. (b) When the function whose zero we want to calculate, *φ*(*s*), has a zero in its derivative (the Jacobian), the iteration may diverge from the desired solution. (c) Situation in which the iteration neither diverges nor converges, but keeps wandering around the vicinity of a zero-derivative point.

Unfortunately, the simplicity of (2.10) has a high price in numerical stability. Since it is a predictive method, its associated error tends to build up very rapidly, a problem that must be obviated by reducing the time-step, and thus increasing the overall simulation time. In practice, this increase in simulation time and the potential numerical instability are so significant, when compared to (2.12), that (2.10) is hardly ever used in practical circuit simulators.

Actually, (2.10) and (2.12) are not the only schemes we could have used to integrate the state-equation. They are just examples of zero-order methods, in which the derivative is assumed constant during the time-step interval. For example, if a first-order integration (known as the trapezoidal integration rule) had been used, we would obtain [8]

**s**(*t*_{n + 1}) = **s**(*t*_{n}) + (*h*_{n}/2){**f**[**s**(*t*_{n}), **x**(*t*_{n})] + **f**[**s**(*t*_{n + 1}), **x**(*t*_{n + 1})]} (2.17)

In general, the time-step integration scheme is formulated as

of which the 4th-order Runge-Kutta method [8] is one of the most popular implementations (see Exercise 2.1).
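A quick numerical comparison shows why higher-order rules pay off. The test problem below is an illustrative choice of ours, d*s*/d*t* = −*as*, for which the implicit trapezoidal update has a closed form, so no Newton-Raphson solve is needed:

```python
import math

# Accuracy comparison: forward Euler versus the trapezoidal rule on the
# linear test equation ds/dt = -a*s, whose trapezoidal update reduces to
# s[n+1] = s[n]*(1 - a*h/2)/(1 + a*h/2).
a = 2.0
h = 0.01
steps = 100               # integrate up to t = 1 s

s_eu = 1.0
s_tr = 1.0
for _ in range(steps):
    s_eu = s_eu + h * (-a * s_eu)                            # forward Euler
    s_tr = s_tr * (1.0 - a * h / 2.0) / (1.0 + a * h / 2.0)  # trapezoidal

exact = math.exp(-a * 1.0)
err_eu = abs(s_eu - exact)
err_tr = abs(s_tr - exact)
```

For the same time-step, the trapezoidal error at *t* = 1 s is orders of magnitude below the forward Euler error, reflecting its second-order accuracy.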

Finally, it should be added that there is no need to keep the time-step, *h*_{n}, constant. In fact, all modern time-domain simulators use dynamic time-steps, adjusted according to the measured variation of **s**(*t*) in time: larger time-steps are used when **s**(*t*) is slowly varying, while smaller ones are adopted in zones of fast change of state. This way, a good compromise between accuracy and computation time is obtained.

As an example of time-step integration, the trapezoidal rule discretization of our circuit example described by (2.5) and (2.6) would lead to

where the three *f*(*t*_{n}) were defined in (2.6).

For each time step, *t*_{n}, (2.19) is a nonlinear system of three equations in three unknowns, *s*_{1}(*t*_{n+1}), *s*_{2}(*t*_{n+1}) and *s*_{3}(*t*_{n+1}), which can be solved by the Newton-Raphson iteration of (2.16) making

or in a compacted form

and

or

where **I** stands for the identity matrix, and *g*_{m}(*t*) and *g*_{ds}(*t*) are, respectively, the FET’s transconductance and output conductance, defined by *g*_{m}(*t*) = ∂*i*_{D}(*v*_{G}, *v*_{D})/∂*v*_{G} and *g*_{ds}(*t*) = ∂*i*_{D}(*v*_{G}, *v*_{D})/∂*v*_{D}, both evaluated at [*v*_{G}(*t*), *v*_{D}(*t*)].

Figures 2.3 and 2.4 depict the results of the simulation of (2.16) for an excitation of *v*_{I}(*t*) = *V*_{GG} + *v*_{S}(*t*) = 3 + 20 cos (2*πf*_{c}*t*), *f*_{c} = 1 GHz, and with *s*_{1}(0) = 3.0 V, *s*_{2}(0) = 10.0 V, and *s*_{3}(0) = −10/6.6 A as initial conditions. This time-step integration used the trapezoidal rule of (2.19) and a fixed time-step of *h* = 1/(100*f*_{c}).

If a frequency-domain representation of the response is desired, as is quite common in RF and microwave circuit analysis and design, then a DFT must be performed on the time-domain response, which requires the signal to be uniformly sampled and periodic.

Unfortunately, and as we have seen, simulation speed dictates the use of dynamic time-steps, which produce a non-uniformly sampled response. So, either one forces the simulator to use a fixed time-step, e.g., via the simulator’s time-step ceiling option – making the time-domain simulation highly inefficient – or one relies on the hidden interpolation and resampling functions applied before the DFT, which are not error free – making the frequency-domain representation potentially inaccurate.

If the signal is not periodic, the DFT will make it periodic, with a period equal to the simulated time, producing an error that is visible as *spectral leakage* [10], i.e., in which the energy of individual spectral lines is distributed onto (leaks to) their vicinity. This error is caused by the time-domain signal discontinuities that may exist at the beginning and end points of the constructed periodic signal, and is illustrated in Figure 2.7. It is particularly harmful in distortion analysis, where the distortion components may be totally immersed in the spectral leakage error. Hence, it is quite common to adopt measures that mitigate spectral leakage, such as increasing the simulation time (in order to reduce the relative weight of these discontinuities in the overall signal) or simply eliminating these discontinuities by forcing the output signal to start and end at the same zero value – **y**(0) = **y**(*T*) = 0 – multiplying **y**(*t*) by an appropriate window function [10].
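The sketch below reproduces this effect on an illustrative NumPy example of ours (not the book’s *v*_{D}(*t*) waveform): a tone that does not complete an integer number of cycles in the record leaks throughout the spectrum, and a Hanning window strongly attenuates the leakage far from the tone:

```python
import numpy as np

# Spectral leakage demo: a 10.5 Hz tone does not complete an integer
# number of cycles in a 1 s record, so its DFT leaks across the whole
# spectrum; a Hanning window attenuates the far-from-tone leakage.
fs = 1e3
N = 1000
t = np.arange(N) / fs
x = np.cos(2.0 * np.pi * 10.5 * t)               # aperiodic over the record

X_rect = np.abs(np.fft.rfft(x))                  # no window (rectangular)
X_hann = np.abs(np.fft.rfft(x * np.hanning(N)))  # Hanning window

# Peak-normalized leakage far from the tone (bin 100 <-> 100 Hz):
leak_rect = X_rect[100] / X_rect.max()
leak_hann = X_hann[100] / X_hann.max()
```

The price of windowing is a wider spectral line around the tone itself, which is why the window is chosen according to whether resolution or dynamic range matters most.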

Figure 2.7 Illustrative example of the spectral leakage generated when an aperiodic time-domain function is (discrete) Fourier transformed to the frequency domain. Please note the spectral leakage manifested as erroneous spectral power around the desired signal and its mitigation by windowing. In this example, *V*_{dT}(*ω*) is the frequency-domain representation of five periods of the result of our transient simulation, *v*_{D}(*t*), collected between 50 ns and 55 ns, while *V*_{dH}(*ω*) is the corresponding signal passed through a Hanning window [10]. The dotted spectrum, *V*_{dS}(*ω*), is the same five periods but now collected after the steady-state was reached, i.e., between 380 ns and 385 ns.

This lack of periodicity of the output signal can be due to the nature of the problem at hand. For example, a two-tone signal of frequencies *f*_{1} and *f*_{2} is only periodic if its central frequency *f*_{c} = (*f*_{1} + *f*_{2})/2 is a multiple of its frequency separation *Δf* = *f*_{2} − *f*_{1}. But it may also be due to the presence of the transient response. In such a case, it is necessary to truncate the signal to its last few periods. This process is usually quite frustrating when simulating RF circuits, since the length of this annoying transient response can be many orders of magnitude greater than the length of the desired period, or of a few periods. Indeed, the length of these long transients is usually determined by the time-constants associated with the large dc decoupling capacitors and RF chokes of the bias networks.

Therefore, transient time-domain simulation of microwave and RF circuits tends to be either inefficient, inaccurate, or both, which motivated the development of analysis techniques specifically devoted to calculating the periodic steady-state.

#### 2.2.2 Periodic Steady-State Analysis in the Time-Domain

Although one might think that the integration of the transient response is unavoidable, as it corresponds to the way the real circuit behaves, that is not necessarily the case. To understand why, let us recall the previously discussed problem that consists of determining the response of a forced system to a periodic excitation of period *T*. To guarantee that this problem is solvable, let us also assume that the steady-state response is, indeed, periodic, with the same period *T* as the excitation.

Obviously, the complete response starts with the aperiodic transient, which is then followed by the periodic steady-state. This transient is naturally governed by the system, the excitation, and the assumed initial conditions, or initial state, while the periodic steady-state depends only on the system and the excitation (in fact, its period is solely determined by the excitation). Therefore, different initial states lead to distinct transient responses, but always to the same steady-state. Actually, a close observation of Figures 2.3 and 2.4 indicates that different initial states may lead to transients of very different lengths, depending on whether we start from an initial state that is close to, or very far from, the final cyclic orbit that describes the periodic steady-state. In fact, the notion of state tells us that all states of that cyclic orbit keep the system in the same orbit, i.e., lead to a transient response of zero length or, in other words, a response without transient.

Although the mere existence of a transient simulation without a transient may sound absurd, you can convince yourself that this can indeed be true by running the following experiment. Pick any lumped-element circuit you know that, when excited by a periodic stimulus, produces a periodic steady-state response. Perform a normal transient simulation using a SPICE-like circuit simulator for a time that is sufficiently long to let the circuit reach its periodic steady-state. You will naturally see a transient response followed by the periodic steady-state. Analyze this steady-state response, observing and recording the values of all state-variables, for example all capacitor voltages and inductor currents, at the (arbitrary) beginning of one period of the excitation. Now, run the same transient simulation but use the simulator option that allows you to set the initial values of all energy-storage elements to the values previously recorded (use all the precision digits the simulator provides). You can run the transient simulator for a much smaller time-span, as you will see that the response is completely periodic, i.e., that there is no transient response.

We actually performed this experiment on our circuit example, starting with the same initial conditions and excitation used for our previous transient simulation, i.e., *v*_{G}(0) = 3.0 V, *v*_{D}(0) = 10.0 V, *i*_{L}(0) = −10/6.6 A, and *v*_{I}(*t*) = 3 + 20 cos (2*π*10^{9}*t*), measured these variables at *t* = 384 ns, and then used these values [*v*_{G}(384 ns) = 3.118026263640600 V, *v*_{D}(384 ns) = 11.183192073318860 V, and *i*_{L}(384 ns) = −2.625494710083924 A] as the new initial conditions. As expected, the simulator response went directly to the steady-state response of Figures 2.3 and 2.4 without any transient.
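The same experiment can be reproduced in a few lines on a hypothetical first-order circuit, d*s*/d*t* = [*x*(*t*) − *s*]/*τ*, integrated with backward Euler (all values below are illustrative choices of ours): after the transient has died out, the recorded state maps onto itself after exactly one period, so a restart from it shows no transient at all:

```python
import math

# "Transient-free" restart experiment on a hypothetical driven RC-like
# circuit, ds/dt = [x(t) - s]/tau, integrated with backward Euler
# (closed form here because the state-equation is linear).
tau = 1e-3                # time-constant, s
f0 = 1e3                  # excitation frequency, Hz -> period T = 1 ms
T = 1.0 / f0
steps = 1000              # time-steps per period
h = T / steps

def x(t):
    return math.cos(2.0 * math.pi * f0 * t)

def advance_one_period(s, t0):
    """Backward Euler integration of the state over one excitation period."""
    t = t0
    for _ in range(steps):
        t += h
        s = (s + (h / tau) * x(t)) / (1.0 + h / tau)
    return s

# First run: arbitrary initial state; let the transient die out.
s = 5.0
for k in range(30):
    s = advance_one_period(s, k * T)
s_pss = s                 # a state lying on the periodic orbit

# Second run: restarting from s_pss, one period later the state returns
# to (numerically) the same value -> the response is periodic from t = 0.
s_after = advance_one_period(s_pss, 0.0)
```

The recorded state plays here the role of the capacitor voltages and inductor currents recorded in the SPICE experiment described above.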

By the way, this example leads us to a very curious question: what were, after all, the energy-storage initial conditions used by default by the simulator in the first run? And how did it select them? You will find the answer to these questions in the next section.

Now that we are convinced that our problem of bypassing the transient response may have a solution, it reduces to finding a way to determine these appropriate initial states (see Exercise 2.2). There are, at least, three known techniques to do that.

The first method we can imagine to determine the proper initial state (i.e., one that already lies on the cyclic orbit) was already discussed. Indeed, although the time-step integration starts from an initial state that may be very far from the desired one, the state it reaches after one period is closer. This updated state is then inherently used as the initial state for the time-step integration of the second period, from which a new update is found, in a process that we know converges to the desired proper initial state. Our problem is that this process is usually too slow, because of the large dc decoupling capacitors and RF chokes of the bias networks. In an attempt to speed up the transient calculation, a simple dc analysis (in which all capacitors and inductors are treated as open and short circuits, respectively) is performed prior to any transient simulation. Then, the dc open-circuit voltages across the capacitors and the short-circuit currents through the inductors are used as the initial conditions of the corresponding voltages and currents in the actual transient simulation, unless other initial conditions are imposed by the user. These dc values constitute, therefore, the default initial state that the simulator uses. When these dc quiescent values are already very close to the actual dc bias values, as is the case in all linear or quasi-linear circuits, such as small-signal or class A power amplifiers, the transient response is short and this method of finding the default initial conditions can be very effective. Otherwise, i.e., when strongly nonlinear circuits such as class C amplifiers or rectifiers are to be simulated, these default values are still very far from the actual bias point and a long transient response subsists.
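The period-by-period iteration just described can be sketched as follows, assuming a hypothetical driven RC low-pass (illustrative element values, deliberately chosen with a time constant much longer than the excitation period so the slowness of the method is visible). The dc analysis reduces to open-circuiting the capacitor, which sets the initial voltage to the dc value of the source.

```python
import math

# Hypothetical driven RC low-pass, C dv/dt = (vI(t) - v)/R, backward Euler.
R, C = 50.0, 100e-12        # tau = RC = 5 ns >> T: a slow bias-like transient
f = 1e9; T = 1.0 / f; N = 200; h = T / N

def v_in(t):
    return 3.0 + 2.0 * math.cos(2 * math.pi * f * t)

def one_period(v0, t0):
    """Integrate one excitation period, returning the final state."""
    a = h / (R * C)
    v = v0
    for n in range(N):
        v = (v + a * v_in(t0 + (n + 1) * h)) / (1 + a)
    return v

# dc analysis: capacitor open-circuited -> default initial state = 3.0 V
v = 3.0
for k in range(1, 500):
    v_next = one_period(v, (k - 1) * T)
    if abs(v_next - v) < 1e-12:   # state repeats -> periodic steady state
        break
    v = v_next
print(k)   # many periods are needed because tau >> T
```

Even with the dc starting point, roughly a hundred periods are integrated before the state repeats to within 1 pV, illustrating why large bias-network time constants make this brute-force approach expensive.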

The second and third methods find the proper initial condition using a completely different strategy: they change the nature of the numerical problem.

##### 2.2.2.1 Initial-Value and Boundary-Value Problems

Up to now, we have considered that our circuit analysis could be formulated as an *initial-value problem*, i.e., a problem defined by setting initial boundary conditions (the initial state) and a rule for the time evolution through the subsequent internal time-steps (the state-equation). No boundary conditions are defined at the end of the simulation time; as we know, this simulation time can be arbitrarily set by the user. When we wish to determine the periodic steady-state, and we know a priori that the steady-state period is the period of the excitation, we can also define a final boundary condition and reformulate the analysis as a *boundary-value problem*. In this case the initial state, the internal rule, and the final state are all a priori defined. Well, rigorously speaking, what is defined is the internal rule and an equality constraint relating the initial and the final states: **s**(*t*_{0} + *T*) = **s**(*t*_{0}).

##### 2.2.2.2 Finite-Differences in Time-Domain

Having this boundary-value problem in mind, the second method, known as finite-differences in time-domain, is directly formulated from the discretization of the state-equation and the periodic boundary condition **s**(*t*_{0} + *T*) = **s**(*t*_{0}). Using again a zero-order backward Euler discretization in *N* + 1 time-samples, we can define the following (incomplete) system (for each state-variable or each circuit node) of *N* equations in *N* + 1 unknowns

**s**(*t*_{n}) − **s**(*t*_{n−1}) = (*t*_{n} − *t*_{n−1}) **f**[**s**(*t*_{n}), *t*_{n}],  *n* = 1, …, *N*

which, augmented by the boundary equation **s**(*t*_{N}) = **s**(*t*_{0}), constitutes a (now complete) system of *N* + 1 equations in *N* + 1 unknowns that can be readily solved through an (*N* + 1)-dimensional Newton–Raphson iteration scheme.
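A minimal sketch of this scheme for a hypothetical single-state circuit is shown below, where the state-equation is C dv/dt = (vI(t) − v)/R − Is(e^{v/Vt} − 1), a driven RC node loaded by a diode (all element values are illustrative, not the chapter's circuit example). All time-samples of one period are solved simultaneously, with the periodic boundary condition folded in as a cyclic wrap.

```python
import math

# Hypothetical diode-loaded RC node; illustrative element values.
R, C, Is, Vt = 50.0, 10e-12, 1e-12, 26e-3
f0 = 1e9; T = 1.0 / f0; N = 64; h = T / N

def v_in(t):
    return 0.5 + 0.5 * math.cos(2 * math.pi * f0 * t)

def f_rhs(v, t):      # state-equation dv/dt = f(v, t)
    return ((v_in(t) - v) / R - Is * (math.exp(v / Vt) - 1.0)) / C

def df_rhs(v, t):     # its derivative with respect to v
    return -(1.0 / R + (Is / Vt) * math.exp(v / Vt)) / C

def solve_dense(A, b):
    """Naive Gaussian elimination with partial pivoting (teaching code only)."""
    n = len(b)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Unknowns: s[n] = v(t_{n+1}); by periodicity, s[N-1] is also the value at t_0.
t = [(n + 1) * h for n in range(N)]
s = [0.5] * N                       # crude flat initial guess for the waveform
for it in range(50):
    # Backward-Euler residuals; s[n-1] wraps to s[N-1] at n = 0 (periodic BC)
    F = [s[n] - s[n - 1] - h * f_rhs(s[n], t[n]) for n in range(N)]
    if max(abs(Fn) for Fn in F) < 1e-12:
        break
    J = [[0.0] * N for _ in range(N)]
    for n in range(N):
        J[n][n] = 1.0 - h * df_rhs(s[n], t[n])
        J[n][n - 1] += -1.0         # cyclic: row 0 couples to column N-1
    dx = solve_dense(J, [-Fn for Fn in F])
    # Damped Newton update; the step limit keeps the exponential well-behaved
    s = [sn + max(min(dn, 0.1), -0.1) for sn, dn in zip(s, dx)]
```

Note that the Jacobian is bidiagonal except for the single corner entry introduced by the periodic wrap; a production simulator would exploit this sparsity rather than use a dense solve.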

##### 2.2.2.3 Shooting Method

The third alternative to find the proper initial condition that leads to the desired steady state is known as the *shooting* or *periodic steady-state*, *PSS*, method. Conceptually, it can be described as a trial-and-error process in which we start by simulating the circuit for one period with some initial-state guess, for example, the one given by an a priori dc analysis, **s**(*t*_{0})_{1}. In our circuit example, this attempt would lead to the first set of trajectories depicted in the phase-space plots of Figure 2.8, in which **s**(*t*_{N})_{1} ≠ **s**(*t*_{0})_{1}, indicating, as expected, that we failed to reach the periodic steady state. By slightly changing the tested initial condition by some Δ**s**(*t*_{0}), say **s**(*t*_{0})_{2} = **s**(*t*_{0})_{1} + Δ**s**(*t*_{0}), we can try the alternative attempt identified as the second set of trajectories of Figure 2.8. Although this one leads to another failure, since **s**(*t*_{N})_{2} is still different from **s**(*t*_{0})_{2}, it may give us an idea of the sensitivity of the final condition to the initial condition, Δ**s**(*t*_{N})/Δ**s**(*t*_{0}), which we may then use to direct our search for better trials **s**(*t*_{0})_{3}, **s**(*t*_{0})_{4}, …, **s**(*t*_{0})_{K}, until the desired steady state is achieved, i.e., until **s**(*t*_{N})_{K} = **s**(*t*_{0})_{K}.

Figure 2.8 Illustration of the shooting-Newton method for our circuit example excited by *v*_{I}(*t*) = 3 + 20 cos (2*π*10^{9}*t*), where all five required Newton iterations are shown. As done for the previous transient analysis, we started from the initial conditions derived from dc analysis, *v*_{G}(0) = 3.0 V, *v*_{D}(0) = 10.0 V and *i*_{L}(0) = −10/6.6 A, and ended up with *v*_{G}(0) = 3.118026121336410 V, *v*_{D}(0) = 11.183189430567937 V and *i*_{L}(0) = −2.625492967125298 A.
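For a single-state circuit, the trial-and-error picture above reduces to a scalar Newton iteration on g(s_{0}) = s(t_{0} + T; s_{0}) − s_{0}, whose zero is the proper initial condition. The sketch below uses the same hypothetical diode-loaded RC node as before (illustrative element values, not the chapter's circuit): the one-period map is computed by backward-Euler transient integration, and the sensitivity of the final state to the initial state is estimated by a finite difference, exactly in the spirit of the Δ**s**(*t*_{N})/Δ**s**(*t*_{0}) trials described in the text.

```python
import math

# Hypothetical diode-loaded RC node, C dv/dt = (vI(t) - v)/R - Is(e^{v/Vt} - 1).
R, C, Is, Vt = 50.0, 10e-12, 1e-12, 26e-3
f0 = 1e9; T = 1.0 / f0; N = 400; h = T / N

def v_in(t):
    return 0.5 + 0.5 * math.cos(2 * math.pi * f0 * t)

def f_rhs(v, t):
    return ((v_in(t) - v) / R - Is * (math.exp(v / Vt) - 1.0)) / C

def step(v_prev, t):
    """One backward-Euler time-step, solved by an inner Newton iteration."""
    v = v_prev
    for _ in range(50):
        g = v - v_prev - h * f_rhs(v, t)
        dg = 1.0 + h * (1.0 / R + (Is / Vt) * math.exp(v / Vt)) / C
        v -= g / dg
        if abs(g) < 1e-15:
            break
    return v

def one_period(s0):
    """The one-period map s(t0 + T; s0): transient integration over T."""
    v = s0
    for n in range(N):
        v = step(v, (n + 1) * h)
    return v

s0 = 0.0                        # first guess, e.g., from a dc analysis
for k in range(30):             # outer (shooting) Newton iterations
    g = one_period(s0) - s0
    if abs(g) < 1e-12:
        break                   # s(t_N) = s(t_0): periodic steady state found
    d = 1e-6                    # small perturbation of the initial condition
    dgds = (one_period(s0 + d) - (s0 + d) - g) / d   # sensitivity dg/ds0
    s0 -= g / dgds              # Newton update of the trial initial state
print(k, s0)
```

As with the shooting-Newton run of Figure 2.8, only a handful of outer iterations are needed, each costing one (or, with the finite-difference sensitivity, two) single-period transient simulations.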