J. -H. Seol, K. Choo, D. Blaauw, D. Sylvester and T. Jang, "Reference
Oversampling PLL Achieving −256-dB FoM and −78-dBc Reference Spur," in
IEEE Journal of Solid-State Circuits, vol. 56, no. 10, pp.
2993-3007, Oct. 2021 [https://sci-hub.se/10.1109/JSSC.2021.3089930]
K. J. Wang, A. Swaminathan and I. Galton, "Spurious Tone Suppression
Techniques Applied to a Wide-Bandwidth 2.4 GHz Fractional-N PLL," in
IEEE Journal of Solid-State Circuits, vol. 43, no. 12, pp.
2787-2797, Dec. 2008 [https://sci-hub.se/10.1109/JSSC.2008.2005716]
Frequency Divider
Gunnman, Kiran, and Mohammad Vahidfar. Selected Topics in RF,
Analog and Mixed Signal Circuits and Systems. Aalborg: River
Publishers, 2017
Large values of N lower the loop BW, which is bad for jitter
MMD (Multimodulus Divider)
TODO
Noise in dividers (jitter
generation)
S. Levantino, L. Romano, S. Pellerano, C. Samori and A. L. Lacaita,
"Phase noise in digital frequency dividers," in IEEE Journal of
Solid-State Circuits, vol. 39, no. 5, pp. 775-784, May 2004 [https://sci-hub.se/10.1109/JSSC.2004.826338]
Lacaita, Andrea Leonardo, Salvatore Levantino, and Carlo Samori.
Integrated frequency synthesizers for wireless systems.
Cambridge University Press, 2007.
W. F. Egan, "Modeling phase noise in frequency dividers," in IEEE
Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol.
37, no. 4, pp. 307-315, July 1990 [https://sci-hub.se/10.1109/58.56498]
Multiplying the frequency of a signal by a factor of N using an
ideal frequency multiplier increases the phase noise of
the multiplied signal by \(20\log(N)\)
dB.
Similarly dividing a signal frequency by \(N\) reduces the phase noise of the output
signal by \(20\log(N)\) dB
The sideband offset from the carrier in the frequency
multiplied/divided signal is the same as for the original signal.
\(20\log (N)\)
Rule
If the carrier frequency of a clock is divided down by a factor of
\(N\) then we expect the phase noise to
decrease by \(20\log(N)\). The primary
assumption here is a noiseless conventional digital
divider.
The \(20\log(N)\) rule only applies
to phase noise and not integrated phase noise or phase
jitter. Phase jitter should generally measure about the
same.
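A quick numeric sanity check of the \(20\log(N)\) rule (a minimal sketch; the function name and values are illustrative):

```python
import math

def divided_phase_noise(L_dBc_Hz, N):
    """Phase noise at the same offset after an ideal, noiseless divide-by-N."""
    return L_dBc_Hz - 20 * math.log10(N)

# -100 dBc/Hz divided by N = 10 becomes -120 dBc/Hz at the same offset
print(divided_phase_noise(-100.0, 10))
```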
What About Phase Jitter?
We integrate the SSB phase noise L(f) [dBc/Hz] to
obtain rms phase jitter in seconds, using "brick wall"
integration from offset f1 to f2 in Hz, where f0 is the
carrier or clock frequency: \[
\sigma_j = \frac{1}{2\pi f_0}\sqrt{2\int_{f_1}^{f_2}10^{L(f)/10}\,df}
\]
Note that the rms phase jitter in seconds is inversely proportional
to f0. When the frequency is divided down, the phase noise, L(f),
drops by 20log(N) dB. However, since the carrier frequency also goes
down by a factor of N, the phase jitter expressed in units of time is
constant.
Therefore, phase noise curves, related by 20log(N), with the same
phase noise shape over the jitter bandwidth, are expected to
yield the same phase jitter in seconds.
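A numeric check of this claim, assuming a flat phase noise floor over the jitter bandwidth (illustrative values; the brick-wall jitter integral is the one described above):

```python
import math

def rms_jitter_s(L_dBc_Hz, f0, f1, f2):
    """Brick-wall rms jitter in seconds for a flat SSB phase noise L(f) [dBc/Hz]."""
    phase_power = 2 * 10 ** (L_dBc_Hz / 10) * (f2 - f1)   # rad^2
    return math.sqrt(phase_power) / (2 * math.pi * f0)

N = 10
j_orig = rms_jitter_s(-120.0, 1e9, 10e3, 10e6)                          # 1 GHz clock
j_div = rms_jitter_s(-120.0 - 20 * math.log10(N), 1e9 / N, 10e3, 10e6)  # divided by 10
# L(f) dropped 20 dB, f0 dropped 10x: jitter in seconds is unchanged
```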
Spurs are carrier or clock frequency spectral
imperfections measured in the frequency domain just like phase noise.
However, unlike phase noise they are discrete frequency
components.
Spurs are deterministic
Spur power is independent of bandwidth
Spurs contribute bounded peak jitter in the time domain
Sources of Spurs:
External (coupling from other noisy blocks): supply, substrate, bond
wires, etc.
Internal (int-N/fractional-N operation)
Frac spur: Fractional divider (multi-modulus and
frequency accumulation)
In an integer-N PLL, the frequency resolution is equal to the reference
frequency, meaning that only integer multiples of the reference
frequency can be synthesized
Stability requirements limit the loop bandwidth to about
one tenth of the reference frequency; therefore
decreasing the reference frequency increases the settling time as
the loop bandwidth also has to be decreased
a reduced loop bandwidth allows less suppression of the VCO's
inherent phase noise
Another drawback of the integer-N PLL is the trade-off
between phase noise and settling time when the divider ratio
becomes large (The contributions to the output phase noise of
almost all PLL building blocks, except the VCO, are multiplied
by the division ratio)
if a small reference frequency is chosen, the reference spur
in the output phase noise is located at a smaller offset
frequency
Fractional-N
Dither Feedback Divider Ratio by a delta-sigma
modulator
Frequency Accumulation
Switched Capacitor Banks
Q: why \(R_b\)?
A: TODO
Hu, Yizhe. "Flicker noise upconversion and reduction mechanisms in
RF/millimeter-wave oscillators for 5G communications." PhD diss.,
2019.
S. D. Toso, A. Bevilacqua, A. Gerosa and A. Neviani, "A thorough
analysis of the tank quality factor in LC oscillators with switched
capacitor banks," Proceedings of 2010 IEEE International Symposium
on Circuits and Systems, Paris, France, 2010, pp. 1903-1906
False locking
TODO
divider failure
even-stage ring oscillator (multipath ring oscillators)
DLL: harmonic locking, stuck locking
clock edge impact
ck1 is div2 of ck0
edge of ck0 is affected differently by ck1
edge of ck1 is affected equally by ck0
Tri-gate Clock MUX vs
Pass-gate Clock MUX
TODO
Why Type 2 PLL ?
Type: # of integrators within the loop
Order: # of poles in the closed-loop
transfer function
Type \(\leq\) Order
That is, to have a wide bandwidth, a high loop gain is required
More importantly, the type 1 PLL has the problem of a
static phase error for the change of an input
frequency
A step response test is an easy way to determine the
bandwidth.
Sum a small step into the control voltage of your oscillator
(VCO or NCO), and measure the 90% to 10% fall time of the
corrected response at the output of the loop filter as shown in this
block diagram
For a first order loop \[
BW = \frac{0.35}{t} \space\space\space\space \text{(first order system)}
\] where \(BW\) is the 3 dB
bandwidth in Hz and \(t\) is the
10%/90% rise or fall time.
For second order loops with a typical damping factor of 0.7
this relationship is closer to: \[
BW = \frac{0.33}{t}\space\space\space\space \text{(second order system,
damping factor = 0.7)}
\]
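The 0.35 factor can be reproduced for a first-order system, where the 10%-90% rise time is \(\tau\ln 9\) and the 3 dB bandwidth is \(1/2\pi\tau\) (a sketch with an arbitrary time constant):

```python
import math

tau = 1e-6                    # arbitrary first-order time constant
t10 = -tau * math.log(0.9)    # step response y(t) = 1 - exp(-t/tau)
t90 = -tau * math.log(0.1)
t_r = t90 - t10               # 10%-90% rise time = tau*ln(9)
bw = 1 / (2 * math.pi * tau)  # 3 dB bandwidth in Hz
print(bw * t_r)               # ln(9)/(2*pi) ≈ 0.35
```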
Pulse Code Modulation (PCM) is a method for digitally representing
analog signals by sampling their amplitude at regular intervals and then
encoding these samples into binary numbers
Energy/bit (pJ/b)
1mW/Gbps = 1pJ/bit
Joules are a unit of work or energy.
Watts are a unit of power which is the rate at
which energy is generated or consumed.
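The unit identity above is just power divided by bit rate (illustrative numbers):

```python
power_w = 1e-3            # 1 mW of power
bitrate_bps = 1e9         # at 1 Gbps
energy_per_bit = power_w / bitrate_bps
print(energy_per_bit)     # 1e-12 J/bit, i.e. 1 pJ/bit
```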
modulation depth
The modulation index (or
modulation depth) of a modulation scheme
describes by how much the modulated variable of the carrier signal
varies around its unmodulated level
white noise doesn't mean it has a
Gaussian/normal distribution
The only criteria for a (discrete) signal to be
"white" is for each sample to be independently
taken from the same probability distribution
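This can be illustrated with an i.i.d. binary (clearly non-Gaussian) sequence, which is still white: its autocorrelation is an impulse at lag 0 (a sketch using numpy with a fixed seed):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=100_000)  # i.i.d. +/-1 samples: white, not Gaussian

r0 = np.mean(x * x)            # autocorrelation at lag 0 -> 1
r1 = np.mean(x[:-1] * x[1:])   # autocorrelation at lag 1 -> ~0 (whiteness)
kurt = np.mean(x**4) / np.mean(x**2)**2 - 3  # excess kurtosis: -2, not Gaussian's 0
```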
By understanding the input signal's statistical nature, we can gather
more insight into our circuits' requirements than from the
frequency domain alone
The Nyquist rate is the minimum sample rate required
to accurately measure a signal's highest frequency. It's equal to
twice the highest frequency of the
signal
Nyquist frequency
The Nyquist frequency is the highest frequency that can be
represented without aliasing in a discrete signal. It's
equal to half the sampling frequency
Oversampling Ratio (OSR) is defined as the ratio of
the Nyquist frequency \(f_s/2\) to the signal bandwidth \(B\), given by \(\text{OSR}=f_s/(2B)\)
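For example (a trivial helper; the audio-rate numbers are illustrative):

```python
def osr(fs, bandwidth):
    """Oversampling ratio: Nyquist frequency fs/2 over signal bandwidth B."""
    return fs / (2 * bandwidth)

print(osr(6.144e6, 24e3))   # 128: a 6.144 MHz clock oversamples a 24 kHz band
```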
Summation & Integration
Integration: impulse response \(u(t)\), transform \(\frac{1}{s}\), ROC \(\mathfrak{Re}\{s\}\gt 0\)
Summation: impulse response \(u[n]\), transform \(\frac{1}{1-z^{-1}}\), ROC \(|z| \gt 1\)
Both are NOT stable, since neither ROC contains the \(j\omega\) axis (unit circle).
sinc function
where \(W\) is sampling frequency in
Hz
The sinc function is square integrable but not absolutely integrable
Zero-order hold (ZOH)
\[
h_{ZOH}(t) = \text{rect}(\frac{t}{T} - \frac{1}{2}) = \left\{
\begin{array}{cl}
1 & : \ 0 \leq t \lt T \\
0 & : \ \text{otherwise}
\end{array} \right.
\] The effective frequency response is the continuous Fourier
transform of the impulse response \[
H_{ZOH}(f) = \mathcal{F}\{h_{ZOH}(t)\} = T\frac{1-e^{-j2\pi fT}}{j2\pi
fT}=Te^{-j\pi fT}\text{sinc}(fT)
\] where \(\text{sinc}(x)\) is
the normalized sinc function \(\frac{\sin(\pi
x)}{\pi x}\)
The Laplace-domain transfer function of the ZOH follows by replacing
\(j2\pi f\) with \(s\): \[
H_{ZOH}(s) = \mathcal{L}\{h_{ZOH}(t)\}=\frac{1-e^{-sT}}{s}
\]
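The two closed forms above agree, which can be checked numerically (a sketch; T and f are arbitrary, with f ≠ 0 to avoid the removable singularity in the direct form):

```python
import cmath
import math

def h_zoh_direct(f, T):
    """(1 - e^{-sT}) / s evaluated at s = j*2*pi*f (f != 0)."""
    s = 1j * 2 * math.pi * f
    return (1 - cmath.exp(-s * T)) / s

def h_zoh_sinc(f, T):
    """T * e^{-j*pi*f*T} * sinc(fT) with the normalized sinc."""
    x = f * T
    sinc = math.sin(math.pi * x) / (math.pi * x) if x != 0 else 1.0
    return T * cmath.exp(-1j * math.pi * x) * sinc

d = h_zoh_direct(0.3e6, 1e-6)
s = h_zoh_sinc(0.3e6, 1e-6)
```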
The probability distribution of the sum of two or more
independent random variables is the
convolution of their individual distributions.
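A discrete example: convolving the PMF of one fair die with itself yields the distribution of the sum of two dice (a minimal numpy sketch):

```python
import numpy as np

die = np.full(6, 1 / 6)            # PMF of one fair die, faces 1..6
two_dice = np.convolve(die, die)   # PMF of the sum, values 2..12
print(two_dice[5])                 # P(sum = 7) = 6/36
```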
Thermal noise
Thermal noise in an ideal resistor is approximately
white, meaning that its power spectral density is
nearly constant throughout the frequency spectrum.
When limited to a finite bandwidth and viewed in the time
domain, thermal noise has a nearly Gaussian amplitude
distribution
Barkhausen criteria
Barkhausen criteria are necessary but not sufficient
conditions for sustainable oscillations
it simply "latches up" rather than oscillates
System Type
Control of Steady-State Error to Polynomial Inputs: System Type
control systems are assigned a type number according
to the maximum degree of the input polynominal for which the
steady-state error is a finite constant. i.e.
Type 0: Finite error to a step (position error)
Type 1: Finite error to a ramp (velocity error)
Type 2: Finite error to a parabola (acceleration error)
The open-loop transfer function can be expressed as \[
T(s) = \frac{K_n(s)}{s^n}
\]
where we collect all the terms except the poles (\(s\)) at the origin into \(K_n(s)\).
For polynomial inputs, \(r(t)=\frac{t^k}{k!} u(t)\), the transform
is \[
R(s) = \frac{1}{s^{k+1}}
\]
Then the equation for the error is simply \[
E(s) = \frac{1}{1+T(s)}R(s)
\]
Application of the Final Value Theorem to the error formula
gives the result
\(H(j\omega)\) is obtained as below
\[
H(j\omega) = \frac{1}{1+j\omega}
\]
Different Variants of
the PSD Definition
In the practice of engineering, it has become customary to use
slightly different variants of the PSD definition, depending on the
particular application or research field.
Two-Sided PSD, \(S_x(f)\)
this is a synonym of the PSD defined as the Fourier Transform of the
autocorrelation.
One-Sided PSD, \(S'_x(f)\)
this is a variant derived from the two-sided PSD by
considering only the positive frequency semi-axis.
To conserve the total power, the value of the
one-sided PSD is twice that of the two-sided PSD \[
S'_x(f) = \left\{ \begin{array}{cl}
0 & : \ f \lt 0 \\
S_x(f) & : \ f = 0 \\
2S_x(f) & : \ f \gt 0
\end{array} \right.
\]
Note that the one-sided PSD definition makes sense only if the
two-sided is an even function of \(f\)
If \(S'_x(f)\) is even-symmetric
around a positive frequency \(f_0\), then two additional definitions can
be adopted:
Single-Sideband PSD, \(S_{SSB,x}(f)\)
This is obtained from \(S'_x(f)\) by moving the origin of the
frequency axis to \(f_0\): \[
S_{SSB,x}(f) =S'_x(f+f_0)
\] This concept is particularly useful for describing phase or
amplitude modulation schemes in wireless communications, where \(f_0\) is the carrier frequency.
Note that there is no difference in the values of the one-sided
versus the SSB PSD; it is just a pure translation on the frequency
axis.
Double-Sideband PSD, \(S_{DSB,x}(f)\)
this is a variant of the SSB PSD obtained by considering only the
positive frequency semi-axis.
As in the case of the one-sided PSD, to conserve total power, the
value of the DSB PSD is twice that of the SSB \[
S_{DSB,x}(f) = \left\{ \begin{array}{cl}
0 & : \ f \lt 0 \\
S_{SSB,x}(f) & : \ f = 0 \\
2S_{SSB,x}(f) & : \ f \gt 0
\end{array} \right.
\]
Note that the DSB definition makes sense only if the SSB PSD is
even-symmetric around zero
Poles and Zeros of
transfer function
poles
\[
H(s) = \frac{1}{1+s/\omega_0}
\]
magnitude and phase at \(\omega_0\)
and \(-\omega_0\)\[\begin{align}
H(j\omega_0) &= \frac{1}{1+j} = \frac{1}{\sqrt{2}}e^{-j\pi/4} \\
H(-j\omega_0) &= \frac{1}{1-j} = \frac{1}{\sqrt{2}}e^{j\pi/4}
\end{align}\]
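These two evaluations can be verified directly with complex arithmetic (normalizing \(\omega_0 = 1\)):

```python
import cmath
import math

w0 = 1.0                         # normalize omega_0 = 1
H = lambda s: 1 / (1 + s / w0)

h_pos = H(1j * w0)               # H(j*w0):  magnitude 1/sqrt(2), phase -45 deg
h_neg = H(-1j * w0)              # H(-j*w0): magnitude 1/sqrt(2), phase +45 deg
```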
Unlike the quantization noise and the thermal noise, the impact of
the clock jitter on the ADC performance depends on the input signal
properties like its PSD
The error between the ideal sampled signal and the
sampling with clock jitter can be treated as noise and it results
in the degradation of the SNR of the ADC
K. Tyagi and B. Razavi, "Performance Bounds of ADC-Based Receivers
Due to Clock Jitter," in IEEE Transactions on Circuits and Systems
II: Express Briefs, vol. 70, no. 5, pp. 1749-1753, May 2023 [https://www.seas.ucla.edu/brweb/papers/Journals/KT_TCAS_2023.pdf]
N. Da Dalt, M. Harteneck, C. Sandner and A. Wiesbauer, "On the jitter
requirements of the sampling clock for analog-to-digital converters," in
IEEE Transactions on Circuits and Systems I: Fundamental Theory and
Applications, vol. 49, no. 9, pp. 1354-1360, Sept. 2002 [https://sci-hub.se/10.1109/TCSI.2002.802353]
M. Shinagawa, Y. Akazawa and T. Wakimoto, "Jitter analysis of
high-speed sampling systems," in IEEE Journal of Solid-State Circuits,
vol. 25, no. 1, pp. 220-224, Feb. 1990 [https://sci-hub.se/10.1109/4.50307]
In both DAC or ADC cases, doubling the timing jitter doubles the
noise level
Also, doubling the frequency or amplitude doubles the jitter induced
noise - SNR is not improved
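The doubling statements follow from the standard jitter-limited SNR of a full-scale sine, \(\text{SNR} = -20\log_{10}(2\pi f_\text{in}\sigma_t)\) (a sketch; values are illustrative):

```python
import math

def jitter_snr_db(f_in, sigma_t):
    """SNR limit of sampling a full-scale sine f_in with rms clock jitter sigma_t."""
    return -20 * math.log10(2 * math.pi * f_in * sigma_t)

base = jitter_snr_db(1e9, 100e-15)           # 1 GHz input, 100 fs rms jitter
print(base - jitter_snr_db(1e9, 200e-15))    # doubling jitter costs ~6.02 dB
print(base - jitter_snr_db(2e9, 100e-15))    # doubling frequency costs the same
```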
Boris Murmann ISSCC 2022 SC1: Introduction to ADCs/DACs: Metrics,
Topologies, Trade Space, and Applications [pdf]
S. Kim, K. -Y. Lee and M. Lee, "Modeling Random Clock Jitter Effect
of High-Speed Current-Steering NRZ and RZ DAC," in IEEE Transactions
on Circuits and Systems I: Regular Papers, vol. 65, no. 9, pp.
2832-2841, Sept. 2018 [https://sci-hub.se/10.1109/TCSI.2018.2821198]
Martin Clara. High-Performance D/A-Converters - Application to
Digital Transceivers, 2013 [pdf]
Chun-Hsien Su. Design of Oversampled Sigma-Delta Data
Converters. July, 2006 [pdf]
Sampled Thermal Noise
The aliasing of the noise, or noise
folding, plays an important role in switched-capacitor as it
does in all switched-capacitor filters
Assume for the moment that the switch is always closed (so that
there is no hold phase); the single-sided noise density would be
\(v_s[n]\) is the sampled version of
\(v_{RC}(t)\), i.e. \(v_s[n]= v_{RC}(nT_C)\)\[
S_s(e^{j\omega}) = \frac{1}{T_C}
\sum_{k=-\infty}^{\infty}S_{RC}(j(\frac{\omega}{T_C}-\frac{2\pi
k}{T_C})) \cdot d\omega
\] where \(\omega \in [-\pi,
\pi]\), furthermore \(\frac{d\omega}{T_C}= d\Omega\)\[
S_s(j\Omega) = \sum_{k=-\infty}^{\infty}S_{RC}(j(\Omega-k\Omega_s))
\cdot d\Omega
\]
The noise in \(S_{RC}\) is a
stationary process and so is uncorrelated over \(f\) allowing the \(N\) rectangles to be combined by simply
summing their noise powers
Matt Pharr, Wenzel Jakob, and Greg Humphreys. 2016. Physically Based
Rendering: From Theory to Implementation (3rd. ed.). Morgan Kaufmann
Publishers Inc., San Francisco, CA, USA.
R. Gregorian and G. C. Temes. Analog MOS Integrated Circuits for
Signal Processing. Wiley-Interscience, 1986
Chembian Thambidurai, "Power Spectral Density of Pulsed Noise
Signals" [link]
White Noise Modulation
Noisy Resistor & Clocked Switch
\[
v_t (t) = v_i(t)\cdot m_t(t)
\]
where \(v_i(t)\) is the input
white noise, whose autocorrelation is \(A\delta(\tau)\), and \(m_t(t)\) is the periodically
operating switch; then the autocorrelation of \(v_t(t)\) is \[\begin{align}
R_t (t_1, t_2) &= E[v_t(t_1)\cdot v_t(t_2)] \\
&= R_i(t_1, t_2)\cdot m_t(t_1)m_t(t_2)
\end{align}\]
Then \[\begin{align}
R_t(t, t-\tau) &= R_i(\tau)\cdot m_t(t)m_t(t-\tau) \\
& = A\delta(\tau) \cdot m_t(t)m_t(t-\tau) \\
& = A\delta(\tau) \cdot m_t(t)
\end{align}\] Because \(m_t(t)=m_t(t+T)\), \(R_t(t, t-\tau)\) is periodic in the
variable \(t\) with period \(T\)
The time-averaged ACF is denoted as \(\tilde{R_t}(\tau)\)
\[
\tilde{R}_{t}(\tau) = m\cdot A\delta(\tau)
\] That is, \[
S_t(f) = m\cdot S_{A}(f)
\]
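The power scaling \(S_t(f) = m\cdot S_A(f)\) implies the total gated-noise power is \(m\) times the input power, which a short Monte Carlo run confirms (a sketch; the duty cycle and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
v_i = rng.normal(0.0, 1.0, size=200_000)   # white noise with unit power (A = 1)

period, on = 20, 5                          # switch duty cycle m = 5/20 = 0.25
m_t = np.tile(np.r_[np.ones(on), np.zeros(period - on)], len(v_i) // period)
v_t = v_i * m_t                             # noise gated by the periodic switch

power = np.mean(v_t**2)                     # time-averaged power -> m * 1 = 0.25
```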
Much like sinusoidal-steady-state signal analysis,
steady-state noise analysis methods assume an input
\(x(t)\) of infinite
duration, which is a Wide-Sense Stationary (WSS) random
process
Frequency-domain Analysis
Time-domain Analysis
The output \(y(t)\) of a linear
time-invariant (LTI) system \(h(t)\)\[\begin{align}
R_{yy}(\tau) &= R_{xx}(\tau)*[h(\tau)*h(-\tau)] \\
&= S_{xx}(0)\delta(\tau) * [h(\tau)*h(-\tau)] \\
&= S_{xx}(0)[h(\tau)*h(-\tau)] \\
&= S_{xx}(0) \int_\alpha h(\alpha)h(\alpha-\tau)d\alpha
\end{align}\]
with WSS white noise input \(x(t)\),
\(R_{xx}(\tau)=S_{xx}(0)\delta(\tau)\),
therefore
Assuming the noise applied duration is much less than the time
constant, the output voltage does not reach steady-state and WSS
noise analysis does not apply
Time-domain Analysis
The step noise input \(x(t) = \nu(t)u(t)\), where \(\nu(t)\) is an underlying
WSS process: \[
R_{xx}(t_1,t_2) = E[x(t_1)x(t_2)] = R_{\nu\nu}(t_1,
t_2)u(t_1)u(t_2)=R_{\nu\nu}(t_1, t_2) \tag{3.28}
\] for \(t_1, t_2 \geq 0\)
That is \[
\sigma^2_y (t)= R_{yy}(t_1,t_2)|_{t_1=t_2=t}=S_{xx}(0)\int_{-\infty}^t
|h(\tau)|^2d\tau \tag{3.33}
\]
The upper limit of integration, \(t\), is intuitive here
rather than strictly derived
Because stable systems have impulse responses that decay to
zero as time goes to infinity, the output
noise variance approaches the WSS result as time approaches
infinity
Because the definition of the PSD assumes that the variance of the
noise process is independent of time, the PSD of a non-stationary
process is not very meaningful
Input Referred Noise
Noise Voltage to Timing Jitter Conversion & noise
gain
Suppose \(t_i \gg \tau_o\): \[
\overline{v_n^2}(t_i) = 4kTR_n\frac{1}{4\tau_o}
\coth(\frac{t_i}{2\tau_o}) \approx 4kTR_n\frac{1}{4\tau_o}
= \frac{G_n}{G_m}\frac{kT}{C}\frac{1}{A_0}
\] As expected, the input referred noise voltage is \(kT/C\) noise
Pharr, Matt; Humphreys, Greg. (28 June 2010). Physically Based
Rendering: From Theory to Implementation. Morgan Kaufmann. ISBN
978-0-12-375079-2. Chapter
7 (Sampling and reconstruction)
Alan V Oppenheim, Ronald W. Schafer. Discrete-Time Signal Processing,
3rd edition
we get \(C_\text{out,eq}=
(1+\frac{1}{A_v})C_c\simeq C_c\)
Pole Splitting
Generic circuit in textbook
In addition to lowering the required capacitor value, Miller
compensation entails a very important property: it moves the output pole
away from the origin. This effect is called pole
splitting
The 1st stage is replaced with its Thevenin equivalent circuit, i.e. \(V_i \to g_{m1}R_{o1}\cdot V_i\)
\[\begin{align}
\frac{V_i-V_{o1}}{R_{o1}} &= V_{o1}\cdot sC_{o1}+(V_{o1}-V_o)\cdot
sC_c \\
V_{o1} &= \frac{V_i+sR_{o1}C_cV_o}{1+sR_{o1}(C_{o1}+C_c)}
\end{align}\]\[
(V_{o1}-V_o)sC_c=g_{m2}V_{o1}+V_o(\frac{1}{R_{o2}}+sC_L)
\] Substituting \(V_{o1}\), we
get
\(s^3\) terms in denominator \[
H_3 = s^3\cdot(R_{o1}R_{o2}R_c+R_{o1}R_{o2}R_{sw} +R_{o1}R_cR_{sw})\cdot
C_{o1}C_cC_L
\]\(s^2\) terms in denominator
\[\begin{align}
H_2
&=s^2\cdot(R_{o1}R_{o2}C_{o1}C_c+R_{o1}R_{o2}C_{o1}C_L+R_{o2}R_cC_cC_L+R_{o1}R_{o2}C_cC_L+R_{o1}R_cC_{o1}C_c\\
&+R_{o2}R_{sw}C_cC_L+R_{o1}R_{sw}C_cC_L\cdot
g_{m2}R_{o2}+R_{o1}R_{sw}C_{o1}C_L+R_{sw}R_cC_cC_L+R_{o1}R_{sw}C_cC_L)
\end{align}\]
\(s^1\) term in denominator \[
H_1=s(R_{o1}\cdot
g_{m2}R_{o2}C_c+R_{o1}C_{o1}+R_cC_c+R_{o1}C_c+R_{o2}C_c+R_{o2}C_L+R_{sw}C_L)
\]\(s^0\) term in denominator
\[
H_0=1
\] Setting \(R_c=0\) and \(R_{sw}=0\), the \(H_*\) reduce to \[\begin{align}
H_3 &= 0 \\
H_2 &=s^2R_{o1}R_{o2}(C_{o1}C_c+C_{o1}C_L+C_cC_L) \\
H_1&=s(R_{o1}\cdot
g_{m2}R_{o2}C_c+R_{o1}C_{o1}+R_{o1}C_c+R_{o2}C_c+R_{o2}C_L) \\
H_0&=1
\end{align}\] That is \[
H=s^2R_{o1}R_{o2}(C_{o1}C_c+C_{o1}C_L+C_cC_L)+s(R_{o1}\cdot
g_{m2}R_{o2}C_c+R_{o1}C_{o1}+R_{o1}C_c+R_{o2}C_c+R_{o2}C_L)+1
\]
which is the same as our previous analysis of the generic circuit in
the textbook
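The pole estimates can be checked against the exact roots of the reduced quadratic denominator (illustrative component values, not from the text):

```python
import math

# illustrative two-stage values (assumptions, not from the text)
gm2, Ro1, Ro2 = 1e-3, 100e3, 100e3
Co1, Cc, CL = 0.1e-12, 1e-12, 1e-12

a = Ro1 * Ro2 * (Co1 * Cc + Co1 * CL + Cc * CL)                        # s^2 coefficient
b = Ro1 * gm2 * Ro2 * Cc + Ro1 * Co1 + Ro1 * Cc + Ro2 * Cc + Ro2 * CL  # s coefficient

# exact pole magnitudes of a*s^2 + b*s + 1 = 0
w1_exact = (b - math.sqrt(b**2 - 4 * a)) / (2 * a)
w2_exact = (b + math.sqrt(b**2 - 4 * a)) / (2 * a)

w1_est = 1 / (Ro1 * gm2 * Ro2 * Cc)                   # dominant (Miller) pole
w2_est = gm2 * Cc / (Co1 * Cc + Co1 * CL + Cc * CL)   # split non-dominant pole
```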
And we know \[
\frac{V_o}{V_{o2}}=\frac{1}{1+sR_{sw}C_L}
\] Finally, we get \(\frac{V_o}{V_i}\)\[\begin{align}
\frac{V_o}{V_i} &= \frac{V_{o2}}{V_i} \cdot \frac{V_o}{V_{o2}} \\
&= -g_{m2}R_{o2}\frac{\left[ sC_c(R_c-1/g_{m2})+1
\right](sR_{sw}C_L+1)}{H_3+H_2+H_1+1} \cdot \frac{1}{1+sR_{sw}C_L} \\
&= -g_{m2}R_{o2}\frac{ sC_c(R_c-1/g_{m2})+1}{H_3+H_2+H_1+1}
\end{align}\]
The loop transfer function is \[
\frac{V_o}{V_i} =-g_{m1}R_{o1}g_{m2}R_{o2}\frac{
sC_c(R_c-1/g_{m2})+1}{H_3+H_2+H_1+1}
\]
The poles can be deduced \[\begin{align}
\omega_1 &= \frac{1}{R_{o1}\cdot g_{m2}R_{o2}C_c} \\
\omega_2 &= \frac{1}{1+g_{m2}R_{sw}}\cdot \frac{g_{m2}}{C_L} \\
&= \frac{1}{(gm_2^{-1}+R_{sw})C_L}
\end{align}\]
The pole \(\omega_2=\frac{1}{gm_2^{-1}C_L}\) is
changed to \(\omega_2=\frac{1}{(gm_2^{-1}+R_{sw})C_L}\)
In order to cancel \(\omega_2\)
with \(\omega_z\), \(R_c\) shall be increased
The following demonstrates how to derive \(f_{nd}\) from Razavi's equation. We copy
\(\omega_2\) here \[
\omega_2 = \frac{R_{o1}C_c\cdot
g_{m2}R_{o2}+R_{o2}(C_c+C_L)+R_{o1}(C_{o1}+C_c)}{R_{o1}R_{o2}(C_cC_{o1}+C_LC_{o1}+C_LC_c)}
\] which can be reduced as below
This cascode compensation topology is
popularly known as Ahuja compensation
The cause of the positive zero is the feedforward current through
\(C_m\).
To eliminate this zero, we have to cut the feedforward path and create
a unidirectional feedback through \(C_m\).
Adding a resistor (nulling resistor) is one way to mitigate the
effect of the feedforward current.
Another approach uses a current buffer cascode to pass the
small-signal feedback current but cut the feedforward current
People name this approach after the author Ahuja
The benefits of Ahuja compensation over Miller compensation
are several:
better PSRR
higher unity-gain bandwidth using smaller compensation
capacitor
ability to cope better with heavy capacitive and resistive
loads
Of course, if the capacitance at the gate of \(M_1\) is taken into account, pole splitting
is less pronounced.
including \(r_\text{o2}\)
\[
\frac{V_{out}}{I_{in}} \approx
\frac{-g_{m1}R_SR_L(g_{m2}+C_Cs)}{\frac{R_S+r_\text{o2}}{r_\text{o2}}R_LC_LC_Cs^2+g_{m1}g_{m2}R_LR_SC_Cs+g_{m2}}
\] The poles follow as above,
and the zero is not affected, which is \(\omega_z =\frac{g_{m2}}{C_C}\)
The above model's simulation result is shown below.
The zero is located between the two poles.
Taking into account the capacitance at the gate of \(M_1\) and all other second-order effects:
intuitive analysis of zero
Miller compensation
zero in the right half plane \[
g_\text{m1}V_P = sC_c V_P
\]
cascode compensation
zero in the left half plane \[
g_\text{m2}V_X = - sC_c V_X
\]
Mitigate Impact of Zero
dominant pole \[
\omega_\text{p,d} = \frac {1} {R_\text{eq}g_\text{m9}R_{L}C_{c}}
\] first nondominant pole \[
\omega_\text{p,nd} = \frac {g_\text{m4}R_\text{eq}g_\text{m9}} {C_L}
\] zero \[
\omega_\text{z} = (g_\text{m4}R_\text{eq})(\frac {g_\text{m9}} {C_c})
\] i.e. a much greater magnitude than \(g_\text{m9}/C_C\)
Ahuja variations
Pole-Zero Compensation
Pole-Zero Compensation is also known as
Lead Compensation, Parallel
Compensation
Note: The dominant pole is at output of the first
stage, i.e. \(\frac{1}{R_{EQ}C_{EQ}}\).
Pole & Zero in transfer
function
Design with operational amplifiers and analog integrated circuits /
Sergio Franco, San Francisco State University. – Fourth edition
\[
Y = \frac{1}{R_1} + sC_1+\frac{1}{R_c+1/sC_c}
\]
\[\begin{align}
Z &= \frac{1}{\frac{1}{R_1} + sC_1+\frac{1}{R_c+1/sC_c}} \\
&= \frac{R_1(1+sR_cC_c)}{s^2R_1C_1R_cC_c+s(R_1C_c+R_1C_1+R_cC_c)+1}
\end{align}\] If \(p_{1c} \ll
p_{3c}\), two real roots can be found \[\begin{align}
p_{1c} &= \frac{1}{R_1C_c+R_1C_1+R_cC_c} \\
p_{3c} &= \frac{R_1C_c+R_1C_1+R_cC_c}{R_1C_1R_cC_c}
\end{align}\]
The additional zero is \[
z_c = \frac{1}{R_cC_c}
\] Given \(R_c \ll R_1\) and \(C_c \gg C_1\): \[\begin{align}
p_{1c} &\simeq \frac{1}{R_1(C_c+C_1)} \simeq \frac{1}{R_1C_c}\\
p_{3c} &= \frac{1}{R_cC_1}+\frac{1}{R_cC_c}+\frac{1}{R_1C_1} \simeq
\frac{1}{R_cC_1}
\end{align}\]
The output pole is unchanged, which is \[
p_2 = \frac{1}{R_LC_L}
\] We usually cancel \(p_2\) with \(z_c\), i.e. \[
R_cC_c=R_LC_L
\]
Phase margin
unity-gain frequency \(\omega_t\): \[
\omega_t = A_\text{DC}\cdot P_{1c} =\frac{g_{m1}g_{m2}R_L}{C_c}
\]
For PM = 45\(^\circ\): \[
p_{3c} = \omega_t
\] Then, \(C_c\) and \(R_c\) can be obtained
for the unity-gain frequency \(\omega_t\) we find \[
\omega_t = \sqrt{\frac{1}{2}\cdot \frac{g_{m1}g_{m2}}{C_1C_L}}
\] The parallel compensation shows a remarkably good result. The
new 0 dB frequency lies only a factor \(\sqrt{2}\) lower than the theoretical
maximum
To increase \(\phi_m\), we need to
raise \(C_c\) a bit
while lowering \(R_c\)
in proportion in order to maintain pole-zero cancellation. This causes
\(p_{1c}\) and \(p_{3c}\) to split a bit further apart.
The denominator part of \(H_{closed}(s)\) is \[
D(s) =
\frac{s^2}{(A_0+1)\omega_{p1}\omega_{p2}}+\frac{\frac{1}{\omega_{p1}} +
\frac{1}{\omega_{p2}}+\frac{A_0}{\omega_{z}}}{A_0+1}s+1
\]
Thus, the two poles of the closed-loop transfer function of system
are \[\begin{align}
\omega_{pA} &= \frac{A_0+1}{\frac{1}{\omega_{p1}} +
\frac{1}{\omega_{p2}}+\frac{A_0}{\omega_{z}}}
= \frac{(A_0+1)\omega_{p1} \omega_{p2}}{\omega_{p1} + \omega_{p2} +
\frac{A_0}{\omega_z}\omega_{p1} \omega_{p2}}\\
\omega_{pB} &= \omega_{p1} + \omega_{p2} +
\frac{A_0}{\omega_z}\omega_{p1} \omega_{p2}
\end{align}\]
If the non-dominant pole \(\omega_{p2}\) and zero \(\omega_z\) are within the UGB:
\[\begin{align}
\omega_{pA} &\approx \omega_{p2}\\
\omega_{pB} &\approx (1+A_0)\omega_{p1}
\end{align}\] Then, closed-loop transfer function is \[
H_{closed}(s) \approx
\frac{\frac{A_0}{A_0+1}\left(1+\frac{s}{\omega_z}\right)}{\left(1+\frac{s}{(1+A_0)\omega_{p1}}\right)\left(
1+\frac{s}{\omega_{p2}} \right)}
\]
Consider the Laplace transform of the step input, \(X(s)=\frac{1}{s}\): \[
Y(s)=\frac{1}{s}\times H_{closed}(s)
\] Thus, the small-signal step response of the
closed-loop amplifier is \[
y(t)=\frac{A_0}{A_0+1}\left[1-e^{-(A_0+1)\omega_{p1}t}-\left(1-\frac{\omega_{p2}}{\omega_z}\right)e^{-\omega_{p2}t}
\right]u(t)
\] Since \(\omega_{p2}\ll
(1+A_0)\omega_{p1}\), rewrite \(y(t)\): \[
y(t)\approx
\frac{A_0}{A_0+1}\left[1-\left(1-\frac{\omega_{p2}}{\omega_z}\right)e^{-\omega_{p2}t}
\right]u(t)
\]
rio = 1;
wpx = rio*wpz; wzx = wpz/rio; wp1x = 1*wp1;
Ho = A0*(1+s/wzx)/(1+s/wp1x)/(1+s/wp2)/(1+s/wpx);
Hc = Ho/(1+Ho);
[mag_1p0, phase_1p0, wout_1p0] = bode(Ho, wi);
[vo_1p0, to_1p0] = step(Hc, ti);

rio = 1.25;
wpx = rio*wpz; wzx = wpz/rio; wp1x = 0.65*wp1;
Ho = A0*(1+s/wzx)/(1+s/wp1x)/(1+s/wp2)/(1+s/wpx);
Hc = Ho/(1+Ho);
[mag_10p0, phase_10p0, wout_10p0] = bode(Ho, wi);
[vo_10p0, to_10p0] = step(Hc, ti);

subplot(2,2,1)
semilogx(wout_0p1(:)/2/pi, 20*log10(mag_0p1(:)),'b-',LineWidth=2); hold on
semilogx(wout_1p0(:)/2/pi, 20*log10(mag_1p0(:)),'r-',LineWidth=2);
semilogx(wout_10p0(:)/2/pi, 20*log10(mag_10p0(:)),'g-',LineWidth=2);
grid on; xlabel('Hz'); ylabel('Mag (dB)');
legend('\omega_{p2}<\omega_{z}', '\omega_{p2}=\omega_{z}', '\omega_{p2}>\omega_{z}')

subplot(2,2,3)
semilogx(wout_0p1(:)/2/pi, phase_0p1(:),'b-',LineWidth=2); hold on
semilogx(wout_1p0(:)/2/pi, phase_1p0(:),'r-',LineWidth=2);
semilogx(wout_10p0(:)/2/pi, phase_10p0(:),'g-',LineWidth=2);
grid on; xlabel('Hz'); ylabel('Phase')
legend('\omega_{p2}<\omega_{z}', '\omega_{p2}=\omega_{z}', '\omega_{p2}>\omega_{z}')

subplot(2,2,[2 4])
plot(to_0p1(:), vo_0p1(:),'b-',LineWidth=2); hold on
plot(to_1p0(:), vo_1p0(:),'r-',LineWidth=2);
plot(to_10p0(:), vo_10p0(:),'g-',LineWidth=2);
grid on; xlabel('time'); ylabel('V')
legend('\omega_{p2}<\omega_{z}', '\omega_{p2}=\omega_{z}', '\omega_{p2}>\omega_{z}')
reference
Viola Schaffer, ISSCC 2021 Tutorials Designing Amplifiers for
Stability [pdf]
J. H. Huijsing, "Operational Amplifiers, Theory and Design, 3rd ed.
New York: Springer, 2017"
Razavi, Behzad. Design of Analog CMOS Integrated Circuits. India:
McGraw-Hill, 2017. [pdf]
Sansen, Willy M. Analog Design Essentials. Germany: Springer US,
2006.
Gray, P. R., Hurst, P. J., Lewis, S. H., & Meyer, R. G. (2024).
Analysis and design of analog integrated circuits (Sixth
edition.). John Wiley & Sons, Inc..
Ahuja Compensation
B. K. Ahuja, "An Improved Frequency Compensation Technique for CMOS
Operational Amplifiers," IEEE J. Solid-State Circuits, vol. 18, no. 6,
pp. 629-633, Dec. 1983. [https://sci-hub.se/10.1109/JSSC.1983.1052012]
U. Dasgupta, "Issues in "Ahuja" frequency compensation technique",
IEEE International Symposium on Radio-Frequency Integration Technology,
2009. [https://sci-hub.se/10.1109/RFIT.2009.5383679]
R. J. Reay and G. T. A. Kovacs, "An unconditionally stable two-stage
CMOS amplifier," in IEEE Journal of Solid-State Circuits, vol.
30, no. 5, pp. 591-594, May 1995 [https://sci-hub.se/10.1109/4.384174]
A. Garimella and P. M. Furth, "Frequency compensation techniques for
op-amps and LDOs: A tutorial overview," 2011 IEEE 54th International
Midwest Symposium on Circuits and Systems (MWSCAS), 2011 [https://sci-hub.se/10.1109/MWSCAS.2011.6026315]
H. Aminzadeh, R. Lotfi and S. Rahimian, "Design Guidelines for
Two-Stage Cascode-Compensated Operational Amplifiers," 2006 13th IEEE
International Conference on Electronics, Circuits and Systems, 2006 [https://sci-hub.se/10.1109/ICECS.2006.379776]
H. Aminzadeh and K. Mafinezhad, "On the power efficiency of cascode
compensation over Miller compensation in two-stage operational
amplifiers," Proceeding of the 13th international symposium on Low power
electronics and design (ISLPED '08), Bangalore, India, 2008 [https://sci-hub.se/10.1145/1393921.1393995]
EE 240B: Advanced Analog Circuit Design, Prof. Bernhard E. Boser [OTA
II, Multi-Stage]
Parallel Compensation
R.Eschauzier "Wide Bandwidth Low Power Operational Amplifiers", Delft
University Press, 1994.
Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini. 2018.
Feedback Control of Dynamic Systems (8th Edition) (8th. ed.). Pearson.
6.7 Compensation
Application Note AN-1286 Compensation for the LM3478 Boost
Controller
B. Y. T. Kamath, R. G. Meyer and P. R. Gray, "Relationship between
frequency response and settling time of operational amplifiers," in IEEE
Journal of Solid-State Circuits, vol. 9, no. 6, pp. 347-352, Dec. 1974,
[https://sci-hub.se/10.1109/JSSC.1974.1050527]
P. R. Gray and R. G. Meyer, "MOS operational amplifier design-a
tutorial overview," in IEEE Journal of Solid-State Circuits, vol. 17,
no. 6, pp. 969-982, Dec. 1982, [https://sci-hub.se/10.1109/JSSC.1982.1051851]
Damping Factor (\(\zeta\)) is
defined for the closed-loop system
We can analyze the open-loop system from a better perspective because it is
simpler. So, we always use loop gain analysis to find the phase
margin and see whether the system is stable or not.
The zeros in the right half of the complex plane are called
nonminimum phase zeros. Systems with
nonminimum phase zeros are called nonminimum phase
systems
Zero close to the real pole attenuates the effect of that
pole on the system response
Zeros Tend to Increase the Overshoot of the
System
zeta=0.4
omega_n=3
k=1
s=tf('s')
W1=1/(s^2+2*zeta*s+1)
W2=(1/(k*zeta))*s*W1
W3=W1+W2
figure(1)
hold on
step(W1,'r')
step(W2,'k')
step(W3,'b')
grid
Nonminimum Phase Zeros โ Effect on the Transient
Response
zeta=0.4
omega_n=3
k=1
s=tf('s')
S1=1/(s^2+2*zeta*s+1)
S2=(1/(k*zeta))*s*S1
S=S1-S2
figure(1)
hold on
step(S1,'r')
step(-S2,'k')
step(S,'b')
grid
Let \(s=j\omega\) and omit the constant factor,
\[
A_\text{dB}(\omega) = 10\log[1+(\frac{\omega}{\omega _z})^2] -
10\log[1+\frac{\omega^4}{\omega_n^4}+\frac{2\omega^2(2\zeta ^2
-1)}{\omega_n^2}]
\] peaking frequency \(\omega_\text{peak}\) can be obtained via
\(\frac{d A_\text{dB}(\omega)}{d\omega} =
0\)\[
\omega_\text{peak} = \omega_z \sqrt{\sqrt{(\frac{\omega_n}{\omega_z})^4
- 2(\frac{\omega_n}{\omega_z})^2(2\zeta ^2-1)+1} - 1}
\]
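The closed-form \(\omega_\text{peak}\) can be cross-checked by brute-force maximization of \(A_\text{dB}(\omega)\) (normalized, illustrative values of \(\omega_n\), \(\omega_z\), \(\zeta\)):

```python
import numpy as np

wn, wz, zeta = 1.0, 0.5, 0.4        # illustrative normalized values

def A_dB(w):
    num = 10 * np.log10(1 + (w / wz) ** 2)
    den = 10 * np.log10(1 + w**4 / wn**4 + 2 * w**2 * (2 * zeta**2 - 1) / wn**2)
    return num - den

r = (wn / wz) ** 2
w_peak = wz * np.sqrt(np.sqrt(r**2 - 2 * r * (2 * zeta**2 - 1) + 1) - 1)

w = np.linspace(1e-3, 5, 200_000)
w_num = w[np.argmax(A_dB(w))]       # numeric peak location matches the formula
```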
Settling Time
One Pole
we have \[
\tau \approx \left(1 + \frac{R_1}{R_2}\right)\frac{1}{A_0\omega_0}=
\frac{1}{\beta \omega_\text{ugb}}
\]
Two Poles
with open-loop transfer function \(A_{OL}=\frac{A_0}{(1+s/\omega_1)(1+s/\omega_2)}\)
and assuming \(\omega_1\) is dominant
pole, then yield closed-loop transfer function
That is \(\omega_n =
\sqrt{\omega_u\omega_2}\), \(\zeta =
\frac{1}{2}\sqrt{\frac{\omega_2}{\omega_u}}\) , where \(\omega_u\approx \beta A_0 \omega_1\) is the
unity gain bandwidth
Rise Time (0% to
100%)
\[
t_r = \frac{\pi - \beta}{\omega_d}=\frac{\pi -
\arctan\frac{\omega_n\sqrt{1-\zeta^2}}{\zeta\omega_n}}{\omega_n\sqrt{1-\zeta^2}}\approx\frac{\pi
-
\arctan\frac{\sqrt{1-\zeta^2}}{\zeta}}{\sqrt{\omega_u\omega_2}\sqrt{1-\zeta^2}}=\frac{\pi
- \arctan\frac{\sqrt{1-\zeta^2}}{\zeta}}{\omega_u\sqrt{k(1-\zeta^2)}}
\] where \(k = \frac{\omega_2}{\omega_u}\) is a function of the phase margin
Gene F. Franklin, Feedback Control of Dynamic Systems, 8th
Edition
As we know \[
\zeta \omega_n=\frac{1}{2}\sqrt{\frac{\omega_2}{\omega_u}}\cdot
\sqrt{\omega_u\omega_2}=\frac{1}{2}\omega_2
\]
Then \[
t_s = \frac{9.2}{\omega_2}
\]
For \(\text{PM}=70^\circ\), \(\omega_2 = 2.75\omega_u\), that is \[
t_s \approx \frac{3.35}{\omega_u}
\]
For \(\text{PM}=45^\circ\), \(\omega_2 = \omega_u\), that is \[
t_s \approx \frac{9.2}{\omega_u}
\]
The above equation is valid only for the underdamped case, \(\zeta=\frac{1}{2}\sqrt{\frac{\omega_2}{\omega_u}}\lt
1\), that is \(\omega_2\lt
4\omega_u\)
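These numbers can be reproduced with a short script (a sketch; it assumes the dominant pole contributes a full \(-90^\circ\) at \(\omega_u\), so \(\text{PM}=90^\circ-\arctan(\omega_u/\omega_2)\)):

```python
import math

def settling_time(pm_deg, wu=1.0):
    # PM = 90 deg - atan(wu/w2)  =>  w2 = wu / tan(90 deg - PM)
    w2 = wu / math.tan(math.radians(90 - pm_deg))
    zeta = 0.5*math.sqrt(w2/wu)
    assert zeta < 1, "valid only for the underdamped case (w2 < 4*wu)"
    # 1% settling: ts = 4.6/(zeta*wn) = 9.2/w2
    return 9.2/w2

print(settling_time(70))  # ~3.35 (in units of 1/wu)
print(settling_time(45))  # 9.2
```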
2 Stage RC filter
High Pass Filter
Since \(1/sC_1+R_1 \gg R_0\)\[
\frac{V_m}{V_i}(s) \approx \frac{R_0}{R_0 + 1/sC_0} =
\frac{sR_0C_0}{1+sR_0C_0}
\] The step response of \(V_m\) is \[
V_m(t) = e^{-t/R_0C_0}
\] where \(\tau = R_0C_0\)
And \(V_o(s)\) can be expressed as
\[\begin{align}
\frac{V_o}{V_i}(s) & \approx \frac{sR_0C_0}{1+sR_0C_0} \cdot
\frac{sR_1C_1}{1+sR_1C_1} \\
&= \frac{sR_0C_0R_1C_1}{R_0C_0-R_1C_1}\left(\frac{1}{1+sR_1C_1} -
\frac{1}{1+sR_0C_0}\right)
\end{align}\]
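The partial-fraction expansion can be spot-checked numerically at a few values of \(s\) (a sketch with arbitrary component values, \(R_0C_0 \neq R_1C_1\)):

```python
# verify: (s*t0/(1+s*t0))*(s*t1/(1+s*t1))
#       = (s*t0*t1/(t0-t1)) * (1/(1+s*t1) - 1/(1+s*t0))
R0, C0, R1, C1 = 1e3, 1e-6, 10e3, 1e-8   # arbitrary, R0*C0 != R1*C1
t0, t1 = R0*C0, R1*C1

for s in (1j*100.0, 1j*1e4, -500.0 + 1j*2e3):
    lhs = (s*t0/(1 + s*t0))*(s*t1/(1 + s*t1))
    rhs = (s*t0*t1/(t0 - t1))*(1/(1 + s*t1) - 1/(1 + s*t0))
    assert abs(lhs - rhs) <= 1e-9*abs(lhs)
```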
Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini. 2018.
Feedback Control of Dynamic Systems (8th Edition) (8th. ed.).
Pearson. [pdf]
Katsuhiko Ogata, Modern Control Engineering, 5th edition [pdf]
C. T. Chuang, "Analysis of the settling behavior of an operational
amplifier," in IEEE Journal of Solid-State Circuits, vol. 17,
no. 1, pp. 74-80, Feb. 1982 [https://sci-hub.se/10.1109/JSSC.1982.1051689]
AC-driven circuits are often mistaken for nonlinear circuits; however,
the linearity of a circuit is determined by the relationship between
the voltage and current.
While an AC signal varies with time, it still exhibits a linear
relationship across elements like resistors, capacitors, and inductors.
Therefore, AC-driven circuits are linear.
Phasor
The phasor concept has no direct physical significance; it is just a
convenient mathematical tool.
Phasor analysis determines the steady-state response to a linear
circuit driven by sinusoidal sources with frequency \(f\)
If your circuit includes transistors or other nonlinear components,
all is not lost. There is an extension of phasor analysis to nonlinear
circuits called small-signal analysis in which you linearize the
components before performing phasor analysis - AC analyses of SPICE
A sinusoid is characterized by 3 numbers, its amplitude, its phase,
and its frequency. For example \[
v(t) = A\cos(\omega t + \phi) \tag{1}
\] In a circuit there will be many signals but in the case of
phasor analysis they will all have the same frequency. For this reason,
the signals are characterized using only their amplitude and
phase.
The combination of an amplitude and
phase to describe a signal is the
phasor for that signal.
Thus, the phasor for the signal in \((1)\) is \(A\angle \phi\)
In general, phasors are functions of frequency
Often it is preferable to represent a phasor using complex
numbers rather than using amplitude and phase. In this case we
represent the signal as: \[
v(t) = \Re\{Ve^{j\omega t} \} \tag{2}
\] where \(V=Ae^{j\phi}\) is the
phasor.
\((1)\) and \((2)\) are the same
Phasor Model of a Resistor
A linear resistor is defined by the equation \(v = Ri\)
Now, assume that the resistor current is described with the
phasor \(I\). Then \[
i(t) = \Re\{Ie^{j\omega t}\}
\]\(R\) is a real constant, and
so the voltage can be computed to be \[
v(t) = R\Re\{Ie^{j\omega t}\} = \Re\{RIe^{j\omega t}\} =
\Re\{Ve^{j\omega t}\}
\] where \(V\) is the phasor
representation for \(v\), i.e. \[
V = RI
\]
Thus, given the phasor for the current we can directly
compute the phasor for the voltage across the
resistor.
Similarly, given the phasor for the voltage across a
resistor we can compute the phasor for the current through the
resistor using \(I =
\frac{V}{R}\)
Phasor Model of a Capacitor
A linear capacitor is defined by the equation \(i=C\frac{dv}{dt}\)
Now, assume that the voltage across the capacitor is described with
the phasor \(V\). Then \[
v(t) = \Re\{ V e^{j\omega t}\}
\]\(C\) is a real constant
\[
i(t) = C\Re\{\frac{d}{dt}V e^{j\omega t}\} = \Re\{j\omega C V e^{j\omega
t}\}
\] The phasor representation for \(i\) is \(i(t) =
\Re\{Ie^{j\omega t}\}\), that is \(I =
j\omega C V\)
Thus, given the phasor for the voltage across a
capacitor we can directly compute the phasor for the current
through the capacitor.
Similarly, given the phasor for the current through a
capacitor we can compute the phasor for the voltage across the
capacitor using \(V=\frac{I}{j\omega
C}\)
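A small numeric sketch (NumPy, arbitrary example values) confirming that the phasor relation \(I=j\omega C V\) reproduces the defining equation \(i=C\,dv/dt\):

```python
import numpy as np

# arbitrary example values
C, w = 1e-9, 2*np.pi*1e6
V = 1.0*np.exp(1j*0.3)            # voltage phasor A*exp(j*phi)
I = 1j*w*C*V                      # phasor relation I = jwCV

t = np.linspace(0, 2e-6, 20001)
v = np.real(V*np.exp(1j*w*t))     # v(t) = Re{V e^{jwt}}
i_phasor = np.real(I*np.exp(1j*w*t))
i_deriv = C*np.gradient(v, t)     # defining relation i = C dv/dt

err = np.max(np.abs(i_phasor - i_deriv))/np.max(np.abs(i_phasor))
```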
Phasor Model of an Inductor
A linear inductor is defined by the equation \(v=L\frac{di}{dt}\)
Now, assume that the inductor current is described with the
phasor \(I\). Then \[
i(t) = \Re\{ I e^{j\omega t}\}
\]\(L\) is a real constant, and
so the voltage can be computed to be \[
v(t) = L\Re\{\frac{d}{dt}I e^{j\omega t}\} = \Re\{j\omega L I e^{j\omega
t}\}
\] The phasor representation for \(v\) is \(v(t) =
\Re\{Ve^{j\omega t}\}\), that is \(V =
j\omega L I\)
Thus, given the phasor for the current we can directly
compute the phasor for the voltage across the
inductor.
Similarly, given the phasor for the voltage across an inductor we
can compute the phasor for the current through the inductor using \(I=\frac{V}{j\omega L}\)
Impedance and Admittance
Impedance and admittance are generalizations of resistance and
conductance.
They differ from resistance and conductance in that they are complex
and they vary with frequency.
Impedance is defined to be the ratio of the phasor for the
voltage across the component and the current through the component:
\[
Z = \frac{V}{I}
\]
Impedance is a complex value. The real part of the impedance is
referred to as the resistance and the imaginary part is referred to as
the reactance
For a linear component, admittance is defined to be the ratio of the
phasor for the current through the component and the voltage
across the component: \[
Y = \frac{I}{V}
\]
Admittance is a complex value. The real part of the admittance is
referred to as the conductance and the imaginary part is referred to as
the susceptance.
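For example, the impedance and admittance of a series RLC branch split into resistance/reactance and conductance/susceptance (a sketch with arbitrary values):

```python
import numpy as np

# series RLC at f = 1 MHz (arbitrary example values)
f = 1e6
w = 2*np.pi*f
R, L, C = 50.0, 1e-6, 1e-9

Z = R + 1j*w*L + 1/(1j*w*C)   # impedance: Z = V/I
Y = 1/Z                       # admittance: Y = I/V

resistance, reactance = Z.real, Z.imag       # reactance = wL - 1/(wC)
conductance, susceptance = Y.real, Y.imag
```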
Response to Complex
Exponentials
The response of an LTI system to a complex
exponential input is the same complex
exponential with only a change in amplitude
where \(h(n)\) is the impulse
response of a discrete-time LTI system
convolution sum is used here
The signals of the form \(e^{st}\)
in continuous time and \(z^{n}\) in
discrete time, where \(s\) and \(z\) are complex numbers are
referred to as an eigenfunction of the system, and the
amplitude factor \(H(s)\) or
\(H(z)\) is referred to as the system's
eigenvalue
Laplace transform
One of the important applications of the Laplace transform is in the
analysis and characterization of LTI systems, which
stems directly from the convolution property\[
Y(s) = H(s)X(s)
\] where \(X(s)\), \(Y(s)\), and \(H(s)\) are the Laplace transforms
of the input, output, and impulse response of the system,
respectively
From the response of LTI systems to complex exponentials, if the
input to an LTI system is \(x(t) =
e^{st}\), with \(s\) in the ROC of
\(H(s)\), then the output will be \(y(t)=H(s)e^{st}\); i.e., \(e^{st}\) is an eigenfunction of
the system with eigenvalue equal to the Laplace
transform of the impulse response.
s-Domain Element Models
Sinusoidal Steady-State
Analysis
Here Sinusoidal means that source excitations have
the form \(V_s\cos(\omega t +\theta)\)
or \(V_s\sin(\omega t+\theta)\)
Steady state mean that all transient behavior of the
stable circuit has died out, i.e., decayed to zero
\(s\)-domain and phasor-domain
Phasor analysis is a technique to find the steady-state
response when the system input is a sinusoid. That is, phasor
analysis is sinusoidal analysis.
Phasor analysis is a powerful technique with which to find the
steady-state portion of the complete response.
Phasor analysis does not find the transient response.
Phasor analysis does not find the complete response.
The beauty of the phasor-domain circuit is that it is described by
algebraic KVL and KCL equations with time-invariant sources, not
differential equations of time
The difference here is that Laplace analysis can also give
us the transient response
The zero-state response is given by \(\mathscr{L}^{-1}[H(s)F(s)]\), for an
arbitrary \(s\)-domain input \(F(s)\)
where \(Z_L(s) = sL\), the inductor
with zero initial current \(i_L(0)=0\)
and \(Z_C(s)=1/sC\) with zero initial
voltage \(v_C(0)=0\)
transient response & steady-state
response
natural response & forced
response
Transfer Functions
and Frequency Response
transfer function
The transfer function \(H(s)\) is the ratio of the Laplace
transform of the output of the system to its input assuming
all zero initial conditions.
frequency response
An immediate consequence of convolution is that an input of
the form \(e^{st}\) results in an
output \[
y(t) = H(s)e^{st}
\] where the specific constant \(s\) may be complex, expressed as \(s = \sigma + j\omega\)
A very common way to use the exponential response of LTIs is
in finding the frequency response i.e. response
to a sinusoid
First, we express the sinusoid as a sum of two
exponential expressions (Eulerโs relation): \[
\cos(\omega t) = \frac{1}{2}(e^{j\omega t}+e^{-j\omega t})
\] If we let \(s=j\omega\), then
\(H(-j\omega)=H^*(j\omega)\), in polar
form \(H(j\omega)=Me^{j\phi}\) and
\(H(-j\omega)=Me^{-j\phi}\). \[\begin{align}
y_+(t) & = H(s)e^{st}|_{s=j\omega} = H(j\omega)e^{j\omega t} = M
e^{j(\omega t + \phi)} \\
y_-(t) & = H(s)e^{st}|_{s=-j\omega} = H(-j\omega)e^{-j\omega t} = M
e^{-j(\omega t + \phi)}
\end{align}\]
By superposition, the response to the sum of these two
exponentials, which make up the cosine signal, is the sum of the
responses \[\begin{align}
y(t) &= \frac{1}{2}[H(j\omega)e^{j\omega t} + H(-j\omega)e^{-j\omega
t}] \\
&= \frac{M}{2}[e^{j(\omega t + \phi)} + e^{-j(\omega t + \phi)}] \\
&= M\cos(\omega t + \phi)
\end{align}\]
where \(M = |H(j\omega)|\) and \(\phi = \angle H(j\omega)\)
This means if a system represented by the transfer function \(H(s)\) has a sinusoidal input, the
output will be sinusoidal at the same frequency with magnitude
\(M\) and will be shifted in phase by
the angle \(\phi\)
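This can be verified numerically for a first-order low-pass \(H(s)=1/(1+s\tau)\) (a sketch; the time-domain simulation uses a simple zero-order-hold update of \(y'=(x-y)/\tau\)):

```python
import numpy as np

# first-order low-pass H(s) = 1/(1 + s*tau), sinusoidal input cos(w*t)
tau, w = 1.0, 2.0
dt = 1e-3
t = np.arange(0.0, 40.0, dt)
x = np.cos(w*t)

# time-march y' = (x - y)/tau, holding x constant over each step
a = np.exp(-dt/tau)
y = np.zeros_like(t)
for n in range(len(t) - 1):
    y[n+1] = a*y[n] + (1 - a)*x[n]

# predicted steady state: M*cos(w*t + phi), M = |H(jw)|, phi = angle(H(jw))
H = 1/(1 + 1j*w*tau)
y_ss = abs(H)*np.cos(w*t + np.angle(H))

# after the transient dies out, the two agree
err = np.max(np.abs(y[-5000:] - y_ss[-5000:]))
```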
Laplace transform vs.
Fourier transform
Laplace transforms such as \(Y(s)=H(s)U(s)\) can be used to study the
complete response characteristics of systems, including
the transient responseโthat is, the time response to an
initial condition or suddenly applied signal
This is in contrast to the use of Fourier transforms, which
only take into account the steady-state response
Given a general linear system with transfer function \(H(s)\) and an input signal \(u(t)\), the procedure for determining \(y(t)\) using the Laplace transform
is given by the following steps: find \(U(s)=\mathscr{L}\{u(t)\}\), form \(Y(s)=H(s)U(s)\), and take the inverse transform \(y(t)=\mathscr{L}^{-1}\{Y(s)\}\)
FSM and Shift Register of DR and IR works at the
posedge of the clock
TMS, TDI, TDO and Hold Register of DR and IR changes value at the
negedge of the clock
The IR capture value is fixed to end in "01" (its two least
significant bits); this fixed pattern is for easier fault detection
After power-up, the TAP controllers may not be in sync, but there is a
trick. Look at the state machine and notice that no matter what state
you are in, if TMS stays at "1" for five clocks, a TAP controller goes
back to the "Test-Logic-Reset" state. That is used to synchronize the
TAP controllers.
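This reset trick can be checked against a small model of the 1149.1 TAP next-state table (a sketch in Python; state names follow the standard):

```python
# IEEE 1149.1 TAP controller next-state table: state -> (next if TMS=0, next if TMS=1)
TAP = {
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR",    "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR",      "Exit1-DR"),
    "Shift-DR":         ("Shift-DR",      "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR",      "Update-DR"),
    "Pause-DR":         ("Pause-DR",      "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR",      "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR",    "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR",      "Exit1-IR"),
    "Shift-IR":         ("Shift-IR",      "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR",      "Update-IR"),
    "Pause-IR":         ("Pause-IR",      "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR",      "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

# holding TMS=1 for five TCK cycles reaches Test-Logic-Reset from any state
for start in TAP:
    s = start
    for _ in range(5):
        s = TAP[s][1]
    assert s == "Test-Logic-Reset"
```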
It is important to note that in a typical Boundary-Scan test, the
time between launching a signal from a driver (at the falling edge of
the test clock (TCK) in the Update-DR or Update-IR TAP
Controller state) and capturing that signal (at the rising edge of TCK
in the Capture-DR TAP Controller state) is no less
than 2.5 TCK cycles
Further, the time between successive launches on a driver is governed
not only by the TCK rate but by the amount of serial data shifting
needed to load the next pattern data into the concatenated Boundary-Scan
Registers of the Boundary-Scan chain
Thus the effective test data rate of a driver could be thousands of
times lower than the TCK rate
For DC-coupled interconnect, this time is of no concern
For AC-coupled interconnect, the signal may easily decay partially
or completely before it can be captured
If only partial decay occurs before capture, that decay will very
likely be completed before the driver produces the next edge
AC-coupling
In general, AC-coupling can distort a signal transmitted across a
channel depending on its frequency.
Figure 5
The high frequency signal is relatively unaffected by the
coupling
The low frequency signal is severely impacted
it decays to \(V_T\) after a few
time constants
its transition amplitude is double the input amplitude: before the
AC-coupling capacitor the signal swings \(-A_p \to A_p\); after the
capacitor the transient swings \(V_T \to V_T+2A_p\)
A key item to note is that the transitions in the original signal are
preserved, although their start and end points are offset compared to
where they were in the high-frequency case
Test signal implementation
The test data is either the content of the Boundary-Scan Register
Update latch (U) when executing the (DC) EXTEST instruction, or an
"AC Signal" when an AC testing instruction is
loaded into the device.
The AC signal is a test waveform suited for transmission through
AC-coupling
Test signal reception
When an AC testing instruction is loaded, a specialized
test receiver detects transitions of the AC signal seen at the input and
determines if this represents a logic '0' or '1'
When EXTEST is loaded, the input signal level is detected
and sent to the output of the test receiver to the Boundary-Scan
Register cell
When testing for a shorted capacitor, the test software must
ensure that enough time has passed for the signal to decay before
entering Capture-DR, either by stopping TCK or by spending
additional TCK cycles in the Run-Test/Idle TAP Controller
state
EXTEST_PULSE & EXTEST_TRAIN
The two new AC-test instructions provided by this standard differ
primarily in the number and timing of transitions to provide flexibility
in dealing with the specific dynamic behavior of the channels being
tested
AC Test Signal essentially modulates test data so
that it will propagate through AC-coupled channels, for devices that
contain AC pins
Tools should use the EXTEST_PULSE instruction unless
there is a specific requirement for the EXTEST_TRAIN
instruction
EXTEST_PULSE
Generates two additional driver transitions and
allows a tester to vary the time between them depending
on how many TCK cycles the TAP is left in the Run-Test/Idle TAP
Controller state.
This is intended to allow any undesired transient condition to decay
to a DC steady-state value when that will make the
final transition more reliably detectable
The duration in the Run-Test/Idle TAP Controller state
should be at least three times the high-pass coupling
time constant. This allows the first additional transition to
decay away to the DC steady-state value for the
channel, and ensures that the full amplitude of the final
transition is added to or subtracted from that steady-state
value
This establishes a known initial condition for the final
transition and permits reliable specification of the detection
threshold of the test receiver
EXTEST_TRAIN
Generates multiple additional transitions, the
number depending on how long the TAP is left in the
Run-Test/Idle TAP Controller state
This is intended to allow any undesired transient condition to decay
to an AC steady-state value when that will make the
final transition more reliably detectable
IEEE Std 1149.6-2003
This standard is built on top of IEEE Std 1149.1 using the same Test
Access Port structure and Boundary-Scan architecture.
It adds the concept of a "test receiver" to input pins that are
expected to handle differential and/or AC-coupling
It adds two new instructions that cause drivers to emit AC waveforms
that are processed by test receivers.
JTAG Instruction
Implementation
AC mode: hysteresis, detects transitions
DC mode: threshold is determined by the JTAG initial value
reference
IEEE Std 1149.1-2001, IEEE Standard Test Access Port and
Boundary-Scan Architecture, IEEE, 2001
IEEE Std 1149.6-2003, IEEE Standard for Boundary-Scan Testing of
Advanced Digital Networks, IEEE, 2003
B. Eklow, K. P. Parker and C. F. Barnhart, "IEEE 1149.6: a
boundary-scan standard for advanced digital networks," in IEEE Design
& Test of Computers, vol. 20, no. 5, pp. 76-83, Sept.-Oct. 2003,
doi: 10.1109/MDT.2003.1232259.
The periodogram is in fact the Fourier transform of the aperiodic
correlation of the windowed data sequence
estimating
continuous-time stationary random signal
The sequence \(x[n]\) is typically
multiplied by a finite-duration window \(w[n]\), since the input to the DFT must be
of finite duration. This produces the finite-length
sequence \(v[n] = w[n]x[n]\)
That is, by \((1)\)\[
\hat{P}_{ss}(\Omega) = T_s\hat{P}_{xx}(\omega) =
\frac{T_s|V(e^{j\omega})|^2}{\sum_{n=0}^{L-1}(w[n])^2}=\frac{|V(e^{j\omega})|^2}{f_{res}L\sum_{n=0}^{L-1}(w[n])^2}
\]
That is, by \((2)\)\[
\hat{P}_{ss}(\Omega) = T_s\hat{P}_{xx}(\omega) =
\frac{T_sL|V(e^{j\omega})|^2}{\sum_{k=0}^{L-1}(W[k])^2} =
\frac{|V(e^{j\omega})|^2}{f_{res}\sum_{k=0}^{L-1}(W[k])^2}
\]
!! ENBW
Wiener-Khinchin theorem
Norbert Wiener proved this theorem for the case of a
deterministic function in 1930; Aleksandr
Khinchin later formulated an analogous result for stationary
stochastic processes and published that probabilistic
analogue in 1934. Albert Einstein explained, without proofs, the idea in
a brief two-page memo in 1914
\(x(t)\), Fourier transform over a
limited period of time \([-T/2, +T/2]\)
, \(X_T(f) = \int_{-T/2}^{T/2}x(t)e^{-j2\pi
ft}dt\)
With Parseval's theorem\[
\int_{-T/2}^{T/2}|x(t)|^2dt = \int_{-\infty}^{\infty}|X_T(f)|^2df
\] So that \[
\frac{1}{T}\int_{-T/2}^{T/2}|x(t)|^2dt =
\int_{-\infty}^{\infty}\frac{1}{T}|X_T(f)|^2df
\]
where the quantity, \(\frac{1}{T}|X_T(f)|^2\) can be interpreted
as distribution of power in the frequency domain
For each \(f\) this quantity is a
random variable, since it is a function of the random process \(x(t)\)
The power spectral density (PSD) \(S_x(f
)\) is defined as the limit of the expectation of the expression
above, for large \(T\): \[
S_x(f) = \lim _{T\to \infty}\mathrm{E}\left[ \frac{1}{T}|X_T(f)|^2
\right]
\]
The Wiener-Khinchin theorem ensures that for well-behaved
wide-sense stationary processes the limit
exists and is equal to the Fourier transform of the
autocorrelation\[\begin{align}
S_x(f) &= \int_{-\infty}^{+\infty}R_x(\tau)e^{-j2\pi f \tau}d\tau \\
R_x(\tau) &= \int_{-\infty}^{+\infty}S_x(f)e^{j2\pi f \tau}df
\end{align}\]
Note: \(S_x(f)\) in Hz and
inverse Fourier Transform in Hz (\(\frac{1}{2\pi}d\omega = df\))
\[
\frac{1}{2\pi}F^{-1}\{R_{xx}\}d\omega =
\frac{1}{2\pi}F^{-1}\{R_{xx}\}d(2\pi f T)=T\cdot F^{-1}\{R_{xx}\}df =
P_{xx}(f)df
\] power spectral density of a discrete-time
random process \(\{x(n)\}\) is
given by \[
P_{xx}(f) =T\cdot F^{-1}\{R_{xx}\}
\]
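A numeric illustration (sketch, NumPy) of the earlier statement that the periodogram is the Fourier transform of the aperiodic autocorrelation of the data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)
M = 2*N - 1

# periodogram on an M-point grid (zero-padded DFT, no lag aliasing)
per = np.abs(np.fft.fft(x, M))**2 / N

# aperiodic autocorrelation, lags -(N-1)..(N-1), biased estimate
r = np.correlate(x, x, mode="full") / N   # index 0 <-> lag -(N-1)
r = np.roll(r, -(N - 1))                  # index 0 <-> lag 0 (FFT ordering)

S = np.fft.fft(r).real                    # DFT of the autocorrelation
# S equals the periodogram sample-for-sample
```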
The steady-state response is the response that results after any
transient effects have dissipated.
The large signal solution is the starting point for
small-signal analyses, including periodic AC, periodic transfer
function, periodic noise, periodic stability, and periodic scattering
parameter analyses.
Designers refer to periodic steady-state analysis in the time
domain as "PSS" and the corresponding frequency-domain
formulation as "HB"
Harmonic Balance Analysis
The idea of harmonic balance is to find a set of port voltage
waveforms (or, alternatively, the harmonic voltage components) that give
the same currents in both the linear-network equations and
nonlinear-network equations
that is, the currents satisfy Kirchhoff's current law
Define an error function at each harmonic,
\(f_k\), where \[
f_k = I_{\text{LIN}}(k\omega) + I_{\text{NL}}(k\omega)
\] where \(k=0, 1, 2,...,K\)
Note that each \(f_k\) is implicitly
a function of all voltage components \(V(k\omega)\)
Newton
Solution of the Harmonic-Balance Equation
Iterative Process
and Jacobian Formulation
The elements of the Jacobian are the derivatives \[
\frac{\partial F_{n,k}}{\partial V_{m,l}}
\] where \(n\) and \(m\) are the port indices \((1,...,N)\), and \(k\) and \(l\) are the harmonic indices \((0,...,K)\)
Number of Harmonics & Time
Samples
Initial Estimate
One important property of Newton's method is that its speed and
reliability of convergence depend strongly upon the initial estimate of
the solution vector.
Conversion Matrix Analysis
Large-signal/small-signal analysis, or
conversion matrix analysis, is useful for a large class
of problems wherein a nonlinear device is driven, or
"pumped" by a single large sinusoidal signal; another
signal, much smaller, is applied; and we seek only the linear
response to the small signal.
The most common application of this technique is in the design of
mixers and in nonlinear noise analysis
First, analyzing the nonlinear device under
large-signal excitation only, where the harmonic-balance method can be
applied
Then, the nonlinear elements in the device's
equivalent circuit are then linearized to create
small-signal, linear, time-varying elements
Finally, a small-signal analysis is performed
Element Linearized
Shown below is a nonlinear resistive element, which has
the \(I/V\) relationship \(I=f(V)\). It is driven by a
large-signal voltage
Assuming that \(V\) consists of the
sum of a large-signal component \(V_0\)
and a small-signal component \(v\),
with Taylor series\[
f(V_0+v) = f(V_0)+\frac{d}{dV}f(V)|_{V=V_0}\cdot
v+\frac{1}{2}\frac{d^2}{dV^2}f(V)|_{V=V_0}\cdot v^2+...
\] The small-signal, incremental current is found by subtracting
the large-signal component of the current \[
i(v)=I(V_0+v)-I(V_0)
\] If \(v \ll V_0\), \(v^2\), \(v^3\),... are negligible. Then, \[
i(v) = \frac{d}{dV}f(V)|_{V=V_0}\cdot v
\]
\(V_0\) need not be a DC
quantity; it can be a time-varying large-signal voltage \(V_L(t)\), with \(v=v(t)\) a function of time. Then \[
i(t)=g(t)v(t)
\] where \(g(t)=\frac{d}{dV}f(V)|_{V=V_L(t)}\)
The time-varying conductance \(g(t)\), is the derivative of the element's
\(I/V\) characteristic at the
large-signal voltage
By an analogous derivation, one could have a current-controlled
resistor with the \(V/I\)
characteristic \(V = f_R(I)\) and
obtain the small-signal \(v/i\) relation \[
v(t) = r(t)i(t)
\] where \(r(t) =
\frac{d}{dI}f_R(I)|_{I=I_L(t)}\)
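As a sketch of this linearization (Python, with hypothetical diode-like parameters \(I_s\), \(V_t\) and an example pump \(V_L(t)\)):

```python
import numpy as np

# hypothetical diode-like element: I = f(V) = Is*(exp(V/Vt) - 1)
Is, Vt = 1e-14, 0.026

def f(V):
    return Is*(np.exp(V/Vt) - 1)

# large-signal pump V_L(t) (example values)
fp = 1e6
t = np.linspace(0, 2/fp, 2001)
VL = 0.7 + 0.05*np.cos(2*np.pi*fp*t)

# time-varying small-signal conductance g(t) = df/dV along V = V_L(t)
g = Is*np.exp(VL/Vt)/Vt

# small signal v(t) applied on top of the pump: i(t) = g(t) v(t)
v = 1e-3*np.cos(2*np.pi*1.1e6*t)
i = g*v

# compare with the exact incremental current f(VL + v) - f(VL)
i_fd = f(VL + v) - f(VL)
rel_err = np.max(np.abs(i - i_fd))/np.max(np.abs(i_fd))
```

Because \(v \ll V_t\) here, the linear term dominates and `rel_err` stays at the percent level.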
A nonlinear element excited by two tones
supports currents and voltages at mixing frequencies \(m\omega_1+n\omega_2\), where \(m\) and \(n\) are integers. If one of those tones,
\(\omega_1\) has such a low
level that it does not generate harmonics and the other is a
large-signal sinusoid at \(\omega_p\),
then the mixing frequencies are \(\omega=\pm\omega_1+n\omega_p\), which shown
in below figure
A more compact representation of the mixing frequencies is \[
\omega_n=\omega_0+n\omega_p
\] which includes only half of the mixing frequencies:
the negative components of the lower sidebands
(LSB)
and the positive components of the upper sidebands
(USB)
For real signal, positive- and negative-frequency components are
complex conjugate pairs
Shooting Newton
TODO ๐
Nonlinearity &
Linear Time-Varying Nature
Nonlinearity Nature
The nonlinearity causes the signal to be replicated at multiples of
the carrier, an effect referred to as harmonic
distortion, and adds a skirt to the signal that increases its
bandwidth, an effect referred to as intermodulation
distortion
It is possible to eliminate the effect of harmonic
distortion with a bandpass filter, however the frequency of the
intermodulation distortion products overlaps the frequency of
the desired signal, and so cannot be completely removed with
filtering.
Time-Varying Linear Nature
linear with respect to \(v_{in}\) and
time-varying
Given \(v_{in}(t)=m(t)\cos (\omega_c
t)\) and LO signal of \(\cos(\omega_{LO} t)\), then \[
v_{out}(t) = \text{LPF}\{m(t)\cos(\omega_c t)\cdot \cos(\omega_{LO} t)\}
\] and \[
v_{out}(t) = m(t)\cos((\omega_c - \omega_{LO})t)
\]
A linear periodically-varying transfer function implements
frequency translation
Periodic small signal
analyses
Analysis in Simulator
LPV analyses start by performing a periodic analysis to compute the
periodic operating point with only the large
clock signal applied (the LO, the clock, the carrier,
etc.).
The circuit is then linearized about this time-varying
operating point (expand about the periodic equilibrium point
with a Taylor series and discard all but the first-order term)
and the small information signal is applied. The
response is calculated using linear time-varying analysis
Versions of this type of small-signal analysis exist for both
harmonic balance and shooting methods
PAC is useful for predicting the output sidebands produced by a
particular input signal
PXF is best at predicting the input images for a particular
output
Linear Time Varying
The response of a relaxed LTV system at a time \(t\) due to an impulse applied at a time
\(t-\tau\) is denoted by \(h(t, \tau)\)
The first argument in the impulse response denotes the time of
observation.
The second argument indicates that the system was excited by an
impulse launched at a time \(\tau\) prior to the time of observation.
Thus, the response of an LTV system not only depends on how long
before the observation time it was excited by the impulse but also on
the observation instant.
The output \(y(t)\) of an initially
relaxed LTV system with impulse response \(h(t, \tau)\) is given by the convolution
integral \[
y(t) = \int_0^{\infty}h(t,\tau)x(t-\tau)d\tau
\] Assuming \(x(t) = e^{j2\pi f
t}\)\[
y(t) = \int_0^{\infty}h(t,\tau)e^{j2\pi f (t-\tau)}d\tau = e^{j2\pi f
t}\int_0^{\infty}h(t,\tau)e^{-j2\pi f\tau}d\tau
\] The (time-varying) frequency response can be
interpreted as \[
H(j2\pi f, t) = \int_0^{\infty}h(t,\tau)e^{-j2\pi f\tau}d\tau
\] Linear Periodically Time-Varying (LPTV) Systems, which is a
special case of an LTV system whose impulse response satisfies \[
h(t, \tau) = h(t+T_s, \tau)
\] In other words, the response to an impulse remains unchanged
if the time at which the output is observed (\(t\)) and the time at which the impulse is
applied (denoted by \(t_1\)) are both
shifted by \(T_s\)\[
H(j2\pi f, t+T_s) = \int_0^{\infty}h(t+T_s,\tau)e^{-j2\pi f\tau}d\tau =
\int_0^{\infty}h(t,\tau)e^{-j2\pi f\tau}d\tau = H(j2\pi f, t)
\]\(H(j2\pi f, t)\) of an LPTV
system is periodic with time period \(T_s\), so it can be expanded as a Fourier
series in \(t\), resulting in \[
H(j2\pi f, t) = \sum_{k=-\infty}^{\infty} H_k(j2\pi f)e^{j2\pi f_s k t}
\] The coefficients of the Fourier series \(H_k(j2\pi f)\) are given by \[
H_k(j2\pi f) = \frac{1}{T_s}\int_0^{T_s} H(j2\pi f, t) e^{-j2\pi k f_s
t}dt
\]
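For the simplest LPTV example, an ideal multiplier \(y(t)=\cos(2\pi f_p t)\,x(t)\) has \(H(j2\pi f,t)=\cos(2\pi f_p t)\), and its Fourier coefficients \(H_k\) are \(1/2\) at \(k=\pm 1\), i.e. frequency translation by \(\pm f_p\). A numeric sketch:

```python
import numpy as np

fp = 1.0                    # pump frequency (example), Ts = 1/fp
Ts = 1/fp
t = np.linspace(0, Ts, 4096, endpoint=False)

# memoryless multiplier y(t) = cos(2*pi*fp*t)*x(t):
# h(t, tau) = cos(2*pi*fp*t)*delta(tau), so H(j2*pi*f, t) = cos(2*pi*fp*t)
H_t = np.cos(2*np.pi*fp*t)

def Hk(k):
    # H_k = (1/Ts) * integral_0^Ts H(., t) e^{-j 2 pi k fp t} dt
    return np.mean(H_t*np.exp(-2j*np.pi*k*fp*t))

# only k = +/-1 survive, each with weight 1/2
print(abs(Hk(1)), abs(Hk(-1)), abs(Hk(0)))
```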
reference
K. S. Kundert, "Introduction to RF simulation and its application,"
in IEEE Journal of Solid-State Circuits, vol. 34, no. 9, pp. 1298-1319,
Sept. 1999, doi: 10.1109/4.782091. [pdf]
Stephen Maas, Nonlinear Microwave and RF Circuits, Second Edition ,
Artech, 2003. [pdf]
Karti Mayaram. ECE 521 Fall 2016 Analog Circuit Simulation:
Simulation of Radio Frequency Integrated Circuits [pdf1,
pdf2]
Shanthi Pavan, "Demystifying Linear Time Varying Circuits"
S. Pavan and G. C. Temes, "Reciprocity and Inter-Reciprocity: A
Tutorialโ Part I: Linear Time-Invariant Networks," in IEEE Transactions
on Circuits and Systems I: Regular Papers, vol. 70, no. 9, pp.
3413-3421, Sept. 2023, doi: 10.1109/TCSI.2023.3276700.
S. Pavan and G. C. Temes, "Reciprocity and Inter-Reciprocity: A
TutorialโPart II: Linear Periodically Time-Varying Networks," in IEEE
Transactions on Circuits and Systems I: Regular Papers, vol. 70, no. 9,
pp. 3422-3435, Sept. 2023, doi: 10.1109/TCSI.2023.3294298.
S. Pavan and R. S. Rajan, "Interreciprocity in Linear Periodically
Time-Varying Networks With Sampled Outputs," in IEEE Transactions on
Circuits and Systems II: Express Briefs, vol. 61, no. 9, pp. 686-690,
Sept. 2014, doi: 10.1109/TCSII.2014.2335393.
Piet Vanassche, Georges Gielen, and Willy Sansen. 2009. Systematic
Modeling and Analysis of Telecom Frontends and their Building Blocks
(1st. ed.). Springer Publishing Company, Incorporated.
Wereley, Norman. (1990). Analysis and control of linear periodically
time varying systems.
Hameed, S. (2017). Design and Analysis of Programmable Receiver
Front-Ends Based on LPTV Circuits. UCLA. ProQuest ID:
Hameed_ucla_0031D_15577. Merritt ID: ark:/13030/m5gb6zcz. Retrieved from
https://escholarship.org/uc/item/51q2m7bx
Rubiola, E. (2008). Phase Noise and Frequency Stability in
Oscillators (The Cambridge RF and Microwave Engineering Series).
Cambridge: Cambridge University Press. doi:10.1017/CBO9780511812798
Nicola Da Dalt and Ali Sheikholeslami. 2018. Understanding Jitter and
Phase Noise: A Circuits and Systems Perspective (1st. ed.). Cambridge
University Press, USA.
Hueber, G., & Staszewski, R. B. (Eds.) (2010).
Multi-Mode/Multi-Band RF Transceivers for Wireless Communications:
Advanced Techniques, Architectures, and Trends. John Wiley &
Sons. https://doi.org/10.1002/9780470634455
G. Richmond, "Refclk Fanout Best Practices for 8GT/s and 16GT/s
Systems," PCI-SIG Developers Conference, June 7, 2017
Knowing how input phase noise aliases when
sampled by a PLL
An alternate view of phase noise aliasing during the sampling
process
Instead of mirroring the jitter-transfer function
located below \(F_S/2\) across spectral
boundaries located at integer multiples of \(F_S/2\) (i.e. 50 MHz) as shown in Figure 2
(a)
we could alternatively fold the portion of the Raw Data
curve located above \(F_S/2\)
across these spectrum boundaries to appear below \(F_S/2\) as shown in Figure 2 (b)
Integrating the combined area under each Filtered Data curve
shown in Figure 2 (b) is mathematically equivalent to
integrating the entire Filtered Data curve shown in Figure 2 (a)
Phase Noise Analyzer vs TIE jitter using
Real-time Oscilloscope
Since an oscilloscope observes jitter similar to a real
system, we regard its result as the gold
standard against which other methods may be judged
Flat Phase Noise Extension to twice the clock
frequency
Phase Noise Aliasing
& Integration Limits
These two types of measurements deliver the same rms
jitter of \(f_{CK}\):
both rising and falling edges: integrated from \(-f_{CK}\) to \(+f_{CK}\)
only the rising (or falling) edges: integrated from \(-f_{CK}/2\) to \(+f_{CK}/2\)
Temporal autocorrelation and the Wiener-Khinchin
theorem are the more appropriate route to derive the rms value
build the abs_jitter function with seconds as the Y
axis and add the stddev function to determine the Jee
jitter value
or integrate psd
The RMS \(x_{\text{RMS}}\) of a
discrete domain signal \(x(n)\) is
given by \[
x_{\text{RMS}}=\sqrt{\frac{1}{N}\sum_{n=0}^{N-1}|x(n)|^2}
\] Inserting Parseval's theorem given by \[
\sum_{n=0}^{N-1}|x(n)|^2=\frac{1}{N}\sum_{k=0}^{N-1}|X(k)|^2
\] allows for computing the RMS from the spectrum \(X(k)\) as \[
x_{\text{RMS}}=\sqrt{\frac{1}{N^2}\sum_{k=0}^{N-1}|X(k)|^2}
\]
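A quick check of this RMS identity (sketch, NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
x = rng.standard_normal(N)

# RMS in the time domain
rms_time = np.sqrt(np.mean(np.abs(x)**2))

# RMS from the DFT via Parseval's theorem
X = np.fft.fft(x)
rms_freq = np.sqrt(np.sum(np.abs(X)**2)/N**2)
```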
Remarks
Cadence Spectre's PN function may call the
abs_jitter and psd functions under the
hood.
Phase Noise in vsource
Suppose pnoise result of one block is shown as below, and the result
is stimulus of following block
First export Output Noise and
Edge Phase Noise, then select noiseModelType
and noisefile respectively
Under vsource (Source type: pulse) with
different amplitudes & rising/falling times, simulation results
demonstrate that Edge Phase Noise (dBc) maintains the
jitter or phase noise by tweaking the voltage noise at the edge
under the hood, whereas Noise Voltage (V^2/Hz)
maintains the voltage noise
In conclusion, Edge Phase Noise (dBc) is
preferred for phase noise evaluation
notice:
@(#)$CDS: spectre version 21.1.0 64bit 12/01/2023 07:24 (csvcm36c-1) $
@(#)$CDS: virtuoso version ICADVM20.1-64b 10/11/2023 09:26 (cpgbld01) $
SSB Phase Noise (dBc)
Divider PN simulation
Cadence Support. "How to set up pss/pnoise when simulating a driven
circuit or a VCO, both containing dividers"
Cadence, Application Note: Understanding the relations between
time-average noise (phase-noise) and sampled noise (edge-phase noise or
jitter) in Pnoise analysis