image-20241208103218870


noise power at filter output

Chembian Thambidurai, "Comparison Of Noise Power At Lowpass Filter Output" [link]

—, "On Noise Power At The Bandpass Filter Output" [link]

—, "Integrated Power of Thermal and Flicker Noise" [link]

TODO 📅

Pulsed Noise Signals

Chembian Thambidurai, "Power Spectral Density of Pulsed Noise Signals" [link]

image-20241208075822212

Above, the output of the multiplier, \(y(t)\), is passed through an ideal brick-wall low-pass filter with a bandwidth of \(f_0/2\)

When a random signal is multiplied by a pulse function, the resulting signal becomes a cyclo-stationary random process.

As a rule of thumb, for the spectrum of such a pulsed noise signal:

  • thermal noise is multiplied by \(D\)

  • flicker noise is multiplied by \(D^2\),

where \(D\) is the duty cycle of the pulse signal (a quick numerical check of the white-noise case is sketched below)
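A quick numerical sanity check of the white-noise part of this rule (a sketch, not from the referenced post; the sample rate, pulse period and duty cycle are arbitrary choices):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

fs = 1e6          # arbitrary sample rate
N  = 2**20        # number of samples
D  = 0.25         # duty cycle of the pulse train
Tp = 64           # pulse period in samples

x = rng.standard_normal(N)              # wideband (white) noise
p = (np.arange(N) % Tp) < D*Tp          # ideal 0/1 pulse train with duty cycle D
y = x*p                                 # pulsed noise signal

f, Sxx = signal.welch(x, fs=fs, nperseg=4096)
_, Syy = signal.welch(y, fs=fs, nperseg=4096)

# time-averaged PSD of pulsed wideband noise ~ D times the original PSD
print(Syy.mean()/Sxx.mean())            # ~0.25
```

Checking the \(D^2\) scaling for flicker noise would additionally require a \(1/f\) noise generator and a comparison restricted to the low-frequency bins, which is omitted here.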

image-20241208111744647


bandlimited input

image-20241208113904927

wideband white noise input

image-20241208114442705

flicker noise input

with \(S_x(f)=\frac{K_f}{f}\)

image-20241208121027250

image-20241208121402724

Assuming \(\Delta f \ll f_0\)

image-20241208121645637


image-20241208111506517

Sampling Noise

Chembian Thambidurai, "Noise, Sampling and Zeta Functions" [link]

A random signal \(v_n(t)\) is sampled using an ideal impulse sampler

image-20241201165157743

TODO 📅

ADC SNR & clock jitter

Chembian Thambidurai, "SNR of an ADC in the presence of clock jitter" [https://www.linkedin.com/posts/chembiyan-t-0b34b910_adcsnrjitter-activity-7171178121021304833-f2Wd?utm_source=share&utm_medium=member_desktop]

Unlike the quantization noise and the thermal noise, the impact of the clock jitter on the ADC performance depends on the input signal properties like its PSD

image-20241123205352661

The error between the ideal sampled signal and the signal sampled with clock jitter can be treated as noise, and it results in degradation of the SNR of the ADC

image-20241124004634365

For sinusoid input:

image-20241210235817281

K. Tyagi and B. Razavi, "Performance Bounds of ADC-Based Receivers Due to Clock Jitter," in IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 70, no. 5, pp. 1749-1753, May 2023 [https://www.seas.ucla.edu/brweb/papers/Journals/KT_TCAS_2023.pdf]

N. Da Dalt, M. Harteneck, C. Sandner and A. Wiesbauer, "On the jitter requirements of the sampling clock for analog-to-digital converters," in IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 49, no. 9, pp. 1354-1360, Sept. 2002 [https://sci-hub.se/10.1109/TCSI.2002.802353]

M. Shinagawa, Y. Akazawa and T. Wakimoto, "Jitter analysis of high-speed sampling systems," in IEEE Journal of Solid-State Circuits, vol. 25, no. 1, pp. 220-224, Feb. 1990 [https://sci-hub.se/10.1109/4.50307]


image-20241222140258960

import numpy as np
import matplotlib.pyplot as plt

ENOB = 8
fin = np.logspace(8, 11, 60)

# quantization noise: SNR = 6.02*ENOB + 1.76 dB
Ps_PnQ = 10**((6.02*ENOB + 1.76)/10)
PnQ = 1/Ps_PnQ

# jitter noise: SNR = 6 - 20log10(2*pi*fin*Jrms) dB @ref. Chembiyan T
Jrms_list = [25e-15, 50e-15, 100e-15, 250e-15, 500e-15, 1000e-15]
for Jrms in Jrms_list:
    # Ps_PnJ_lcl = 10**((6-20*np.log10(2*np.pi*fin*Jrms))/10) # ref. Chembiyan T
    Ps_PnJ_lcl = 10**((0 - 20 * np.log10(2 * np.pi * fin * Jrms)) / 10) # ref. Nicola Da Dalt
    PnJ_lcl = 1/Ps_PnJ_lcl
    SNR_lcl = 10*np.log10(1/(PnQ+PnJ_lcl))
    plt.plot(fin, SNR_lcl, label=r'$\sigma_{jitter}$'+'='+str(int(Jrms*1e15))+'fs')

plt.xscale('log')
plt.ylim([0, 55])
plt.xlabel(r'$f_{in}$ [Hz]')
plt.ylabel(r'SNR [dB]')
plt.grid(which='both')
# plt.title(r'ref. Chembiyan T')
plt.title(r'ref. Nicola Da Dalt')
plt.legend()
plt.show()

i.e., using N. Da Dalt's formula:

image-20241210232716862

Ayça Akkaya, "High-Speed ADC Design and Optimization for Wireline Links" [https://infoscience.epfl.ch/server/api/core/bitstreams/96216029-c2ff-48e5-a675-609c1e26289c/content]


image-20241222135948195


image-20250525134220901

image-20250525135503671

\[\begin{align} \text{SNR}_\text{ADC}[\text{dB}] &= -20\cdot \log \sqrt{\left(10^{-\frac{\text{SNR}_\text{Quantization Noise}}{20}}\right)^2 + \left(10^{-\frac{\text{SNR}_\text{Jitter}}{20}}\right)^2} \\ &= -10\cdot \log \left(\left(10^{-\frac{\text{SNR}_\text{Quantization Noise}}{20}}\right)^2 + \left(10^{-\frac{\text{SNR}_\text{Jitter}}{20}}\right)^2\right) \\ &= -10\cdot \log \left(\left(10^{-\frac{10\log(\frac{3\times2^{2N}}{2})}{20}}\right)^2 + \left(10^{-\frac{-20\log{(2\pi f_\text{in}\sigma_\text{jitter})}}{20}}\right)^2\right) \\ &= -10\cdot \log \left( \frac{2}{3\times 2^{2N}} + (2\pi f_\text{in}\sigma_\text{jitter})^2 \right) \end{align}\]

Ayça Akkaya, "High-Speed ADC Design and Optimization for Wireline Links" [https://infoscience.epfl.ch/server/api/core/bitstreams/96216029-c2ff-48e5-a675-609c1e26289c/content]

CC Chen, Why Absolute Jitter Matters for ADCs & DACs? [https://youtu.be/jBgDDFFDq30?si=XFyTEfApN86Ef-RG]

Thomas Neu, TIPL 4704. Jitter vs SNR for ADCs [https://www.ti.com/content/dam/videos/external-videos/en-us/2/3816841626001/5529003238001.mp4/subassets/TIPL-4704-Jitter-vs-SNR.pdf]

image-20250525141523199

image-20250525143507747

import numpy as np
import matplotlib.pyplot as plt

N = 12 # ADC bit
fin = 100e6 # the frequency of the sinusoidal input signal
jrms = np.linspace(0, 10, 1000)*1e-12 #ps

# ADC (SNR) with Quantization Noise & Jitter degradation
SNR_ADC = -10 * np.log10(10**(-np.log10(3*2**(2*N)/2)) + (2*np.pi*fin*jrms)**2)
ENOB = (SNR_ADC - 1.76) / 6.02

plt.plot(jrms*1e12, ENOB, label='100 MHz input')
plt.plot([0, 10], [12, 12], '--', label='12-bit limit')
plt.plot([0, 10], [6, 6], '--', label='6-bit limit')

plt.xscale('linear')
plt.xlim([0, 10])
plt.ylim([5, 15])
plt.xlabel('RMS Jitter (ps)')
plt.ylabel('Effective Number of Bits (ENOB)')
plt.grid(which='both')
plt.title('ENOB vs. RMS Clock Jitter (100 MHz)')
plt.legend()
plt.show()

Sampled Thermal Noise

The aliasing of the noise, or noise folding, plays an important role in switched-capacitor circuits, as it does in all switched-capacitor filters

image-20240425215938141

Assume for the moment that the switch is always closed (i.e., there is no hold phase); the single-sided noise density would be

image-20240428182816109


image-20240428180635082

\(v_s[n]\) is the sampled version of \(v_{RC}(t)\), i.e. \(v_s[n]= v_{RC}(nT_C)\) \[ S_s(e^{j\omega}) = \frac{1}{T_C} \sum_{k=-\infty}^{\infty}S_{RC}(j(\frac{\omega}{T_C}-\frac{2\pi k}{T_C})) \cdot d\omega \] where \(\omega \in [-\pi, \pi]\), furthermore \(\frac{d\omega}{T_C}= d\Omega\) \[ S_s(j\Omega) = \sum_{k=-\infty}^{\infty}S_{RC}(j(\Omega-k\Omega_s)) \cdot d\Omega \]

image-20240428215559780

image-20240425220033340

The noise in \(S_{RC}\) is a stationary process and so is uncorrelated over \(f\) allowing the \(N\) rectangles to be combined by simply summing their noise powers
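A small numeric illustration of this folding (a sketch; the \(R\), \(C\) and sampling-rate values below are arbitrary): summing the aliased replicas of \(S_{RC}(f)=\frac{4kTR}{1+(2\pi fRC)^2}\) into one Nyquist band preserves the total \(kT/C\) power and yields an average folded density of \(2kT/(Cf_s)\).

```python
import numpy as np

kB, T = 1.380649e-23, 300        # Boltzmann constant, temperature
R, C  = 10e3, 1e-12              # assumed switch resistance and sampling capacitor
fs    = 100e6                    # assumed sampling rate

def S_RC(f):
    # single-sided PSD across C with the switch always closed [V^2/Hz]
    return 4*kB*T*R/(1 + (2*np.pi*f*R*C)**2)

f  = np.linspace(0, fs/2, 2001)  # one Nyquist band
df = f[1] - f[0]

# fold the replicas S_RC(|f - n*fs|) into [0, fs/2]
S_folded = sum(S_RC(np.abs(f - n*fs)) for n in range(-2000, 2001))

print("total power :", S_folded.sum()*df, "  vs kT/C       :", kB*T/C)
print("mean density:", S_folded.mean(),   "  vs 2kT/(C*fs) :", 2*kB*T/(C*fs))
```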

image-20240428225949327

image-20240425220400924

where \(m\) is the duty cycle


The analysis below focuses on the sampled noise

image-20240427183257203

image-20240427183349642

image-20240427183516540

image-20240427183458649

  • Calculate autocorrelation function of noise at the output of the RC filter
  • Calculate the spectrum by taking the discrete time Fourier transform of the autocorrelation function

image-20240427183700971

Kundert, Ken. (2006). Simulating Switched-Capacitor Filters with SpectreRF [https://designers-guide.org/analysis/sc-filters.pdf]

Pavan, Schreier and Temes, "Understanding Delta-Sigma Data Converters, Second Edition" ISBN 978-1-119-25827-8

Boris Murmann, EE315B VLSI Data Conversion Circuits, Autumn 2013

- Noise Analysis in Switched-Capacitor Circuits, ISSCC 2011 / tutorials [slides, transcript]

Tania Khanna, ESE568 Fall 2019, Mixed Signal Circuit Design and Modeling URL: https://www.seas.upenn.edu/~ese568/fall2019/

Matt Pharr, Wenzel Jakob, and Greg Humphreys. 2016. Physically Based Rendering: From Theory to Implementation (3rd. ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.

Bernhard E. Boser . Advanced Analog Integrated Circuits Switched Capacitor Gain Stages [https://people.eecs.berkeley.edu/~boser/courses/240B/lectures/M05%20SC%20Gain%20Stages.pdf]

R. Gregorian and G. C. Temes. Analog MOS Integrated Circuits for Signal Processing. Wiley-Interscience, 1986

Trevor Caldwell, Lecture 9 Noise in Switched-Capacitor Circuits [http://individual.utoronto.ca/trevorcaldwell/course/NoiseSC.pdf]

Christian-Charles Enz. High precision CMOS micropower amplifiers [pdf]

reference

David Herres, The difference between signal under-sampling, aliasing, and folding URL: https://www.testandmeasurementtips.com/the-difference-between-signal-under-sampling-aliasing-and-folding-faq/

Pharr, Matt; Humphreys, Greg. (28 June 2010). Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann. ISBN 978-0-12-375079-2. Chapter 7 (Sampling and reconstruction)

Alan V Oppenheim, Ronald W. Schafer. Discrete-Time Signal Processing, 3rd edition

Piston Ring (活塞环)

image-20241124132957086

Horsepower vs Torque (马力 vs 扭矩)

image-20241116193639833

Torque = force × lever arm (\(T=FL\))

Vehicle suspension (悬挂)

image-20241116194036720

Transmission (变速箱)

image-20241117110054261

The gearbox converts between rotational speed and torque: in a low gear the torque is large and the speed is low; in a high gear the torque is small and the speed is high

The force at the meshing point between the engine's small gear and the gearbox's large gear is the same on both sides (action and reaction), but the lever arms differ, which is how the torque conversion is achieved

Ackermann steering geometry (阿克曼转向几何)

image-20241117112328574

Direct injection (缸内直喷)

  • Port (manifold) injection
  • Direct injection
    • better fuel economy
    • stronger power output
    • more freedom in injection timing

image-20241119205154609

This cascode compensation topology is popularly known as Ahuja compensation

The cause of the positive zero is the feedforward current through \(C_m\).

To eliminate this zero, we have to cut the feedforward path and create a unidirectional feedback through \(C_m\).

  1. Adding a resistor(nulling resistor) is one way to mitigate the effect of the feedforward current.

  2. Another approach uses a current buffer cascode to pass the small-signal feedback current but cut the feedforward current

This approach is named after its author, Ahuja

The benefits of Ahuja compensation over Miller compensation are several:

  • better PSRR

  • higher unity-gain bandwidth using smaller compensation capacitor

  • ability to cope better with heavy capacitive and resistive loads

Miller's approximation

image-20250514201620152

Right-Half-Plane Zero

\[ \left[(v_i - v_o)sC_c - g_m v_i\right]R_o = v_o \] Then \[ \frac{v_o}{v_i} = -g_mR_o\frac{1-s\frac{C_c}{g_m}}{1+sR_oC_c} \] right-half-plane Zero \(\omega _z = \frac{g_m}{C_c}\)
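A quick symbolic check of this node equation (a sketch using sympy; symbols only, no device values):

```python
import sympy as sp

s, gm, Ro, Cc = sp.symbols('s g_m R_o C_c', positive=True)
vi, vo = sp.symbols('v_i v_o')

# KCL at the output node: [(vi - vo)*s*Cc - gm*vi]*Ro = vo
sol = sp.solve(sp.Eq(((vi - vo)*s*Cc - gm*vi)*Ro, vo), vo)[0]
H = sp.simplify(sol/vi)

print(H)                                      # equals -gm*Ro*(1 - s*Cc/gm)/(1 + s*Ro*Cc)
print(sp.solve(sp.numer(sp.together(H)), s))  # [g_m/C_c] -> the right-half-plane zero
```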

Equivalent cap

The amplifier gain magnitude \(A_v = g_m R_o\) \[ I_\text{c,in} = (v_i - v_o)sC_c \] Then \[\begin{align} I_\text{c,in} &= (v_i + A_v v_i)sC_c \\ & = v_i s (1+A_v)C_c \end{align}\]

we get \(C_\text{in,eq}= (1+A_v)C_c\simeq A_vC_c\)

Similarly \[\begin{align} I_\text{c,out} &= (v_o - v_i)sC_c \\ & = v_o s (1+\frac{1}{A_v})C_c \end{align}\]

we get \(C_\text{out,eq}= (1+\frac{1}{A_v})C_c\simeq C_c\)

cascode compensation

image-20240817193513058

image-20240817201727109

Of course, if the capacitance at the gate of \(M_1\) is taken into account, pole splitting is less pronounced.


including \(r_\text{o2}\)

image-20240819202642809 \[ \frac{V_{out}}{I_{in}} \approx \frac{-g_{m1}R_SR_L(g_{m2}+C_Cs)}{\frac{R_S+r_\text{o2}}{r_\text{o2}}R_LC_LC_Cs^2+g_{m1}g_{m2}R_LR_SC_Cs+g_{m2}} \] The poles as

\[\begin{align} \omega_{p1} &\approx \frac{1}{g_{m1}R_LR_SC_c} \\ \omega_{p2} &\approx \frac{g_{m2}R_Sg_{m1}}{C_L}\frac{r_\text{o2}}{R_S+r_\text{o2}} \end{align}\]

and zero is not affected, which is \(\omega_z =\frac{g_{m2}}{C_C}\)

The simulation result of the above model is shown below

image-20240819221653262

the zero is located between two poles

Taking into account the capacitance at the gate of \(M_1\) and all other second-order effects:

image-20240819222727276

intuitive analysis of zero

miller compensation

  • zero in the right half plane \[ g_\text{m1}V_P = sC_c V_P \]

cascode compensation

  • zero in the left half plane \[ g_\text{m2}V_X = - sC_c V_X \]

zero_loc.drawio

How to Mitigate Impact of Zero

cascode_compensation

dominant pole \[ \omega_\text{p,d} = \frac {1} {R_\text{eq}g_\text{m9}R_{L}C_{c}} \] first nondominant pole \[ \omega_\text{p,nd} = \frac {g_\text{m4}R_\text{eq}g_\text{m9}} {C_L} \] zero \[ \omega_\text{z} = (g_\text{m4}R_\text{eq})(\frac {g_\text{m9}} {C_c}) \] a much greater magnitude than \(g_\text{m9}/C_C\)

Lectures

EE 240B: Advanced Analog Circuit Design, Prof. Bernhard E. Boser [OTA II, Multi-Stage]

Papers

B. K. Ahuja, "An improved frequency compensation technique for CMOS operational amplifiers," in IEEE Journal of Solid-State Circuits, vol. 18, no. 6, pp. 629-633, Dec. 1983, doi: 10.1109/JSSC.1983.1052012.

D. B. Ribner and M. A. Copeland, "Design techniques for cascoded CMOS op amps with improved PSRR and common-mode input range," in IEEE Journal of Solid-State Circuits, vol. 19, no. 6, pp. 919-925, Dec. 1984, doi: 10.1109/JSSC.1984.1052246.

Abo, Andrew & Gray, Paul. (1999). A 1.5V, 10-bit, 14MS/s CMOS Pipeline Analog-to-Digital Converter.

Book's chapters

Design of analog CMOS integrated circuits, Behzad Razavi

  • 10.5 Compensation of Two-Stage Op Amps
  • 10.7 Other Compensation Techniques

Analog Design Essentials, Willy M.C. Sansen

  • chapter #5 Stability of operational amplifiers - Compensation of positive zero

Analysis and Design of Analog Integrated Circuits 5th Edition, Paul R. Gray, Paul J. Hurst, Stephen H. Lewis, Robert G. Meyer

  • 9.4.3 Two-Stage MOS Amplifier Compensation

CMOS Analog Circuit Design 3rd Edition, Phillip E. Allen, Douglas R. Holberg

  • 6.2.2 Miller Compensation of the Two-Stage Op Amp

Ahuja variations

image-20250609210452373

reference

B. K. Ahuja, "An Improved Frequency Compensation Technique for CMOS Operational Amplifiers," IEEE 1. Solid-State Circuits, vol. 18, no. 6, pp. 629-633, Dec. 1983. [https://sci-hub.se/10.1109/JSSC.1983.1052012]

U. Dasgupta, "Issues in "Ahuja" frequency compensation technique", IEEE International Symposium on Radio-Frequency Integration Technology, 2009. [https://sci-hub.se/10.1109/RFIT.2009.5383679]

R. J. Reay and G. T. A. Kovacs, "An unconditionally stable two-stage CMOS amplifier," IEEE J. Solid-State Circuits, vol. 30, no. 5, pp. 591-594, May 1995.

A. Garimella and P. M. Furth, "Frequency compensation techniques for op-amps and LDOs: A tutorial overview," 2011 IEEE 54th International Midwest Symposium on Circuits and Systems (MWSCAS), 2011, pp. 1-4, doi: 10.1109/MWSCAS.2011.6026315.

H. Aminzadeh, R. Lotfi and S. Rahimian, "Design Guidelines for Two-Stage Cascode-Compensated Operational Amplifiers," 2006 13th IEEE International Conference on Electronics, Circuits and Systems, 2006, pp. 264-267, doi: 10.1109/ICECS.2006.379776.

H. Aminzadeh and K. Mafinezhad, "On the power efficiency of cascode compensation over Miller compensation in two-stage operational amplifiers," Proceeding of the 13th international symposium on Low power electronics and design (ISLPED '08), Bangalore, India, 2008, pp. 283-288, doi: 10.1145/1393921.1393995.

Stabilizing a 2-Stage Amplifier URL:https://a2d2ic.wordpress.com/2016/11/10/stabilizing-a-2-stage-amplifier/

overview

image-20240721172721884

image-20240629140001275

\(\omega_d\) is called the damped natural frequency


closed loop frequency response

image-20240629134127219 \[\begin{align} A &= \frac{\frac{A_0}{(1+s/\omega_1)(1+s/\omega_2)}}{1+\beta \frac{A_0} {(1+s/\omega_1)(1+s/\omega_2)}} \\ &= \frac{A_0}{1+A_0 \beta}\frac{1}{\frac{s^2}{\omega_1\omega_2(1+A_0\beta)}+\frac{1/\omega_1+1/\omega_2}{1+A_0\beta}s+1} \\ &\simeq \frac{A_0}{1+A_0 \beta}\frac{1}{\frac{s^2}{\omega_u\omega_2}+\frac{1}{\omega_u}s+1} \\ &= \frac{A_0}{1+A_0 \beta}\frac{\omega_u\omega_2}{s^2+\omega_2s+\omega_u\omega_2} \end{align}\]

That is \(\omega_n = \sqrt{\omega_u\omega_2}\) and \(\zeta = \frac{1}{2}\sqrt{\frac{\omega_2}{\omega_u}}\)

where \(\omega_u\) is the unity gain bandwidth

image-20240629112429803

where \(f_r\) is the resonant frequency, \(\zeta\) is the damping ratio, \(P_f\) is the maximum peaking, and \(P_t\) is the peak of the first overshoot (step response)

image-20240629142324982

damping factor & phase margin

  • phase margin is defined for open loop system

  • damping factor (\(\zeta\)) is defined for close loop system

The phase margin is roughly 90 to 100 times the damping factor (\(\zeta\)) \[ \mathrm{PM} \approx 90\zeta \sim 100\zeta \] In order to have a good, stable system, we want \(\zeta > 0.5\), or a phase margin of more than \(45^\circ\)

We can analyze the open-loop system from a better perspective because it is simpler. So, we always use the loop-gain analysis to find the phase margin and see whether the system is stable or not.
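How rough is the rule? For the same two-pole loop gain as above, \(L(s)=\frac{\omega_u}{s}\cdot\frac{1}{1+s/\omega_2}\), the exact phase margin can be computed directly as a function of \(\zeta\) (a sketch; frequencies normalized to \(\omega_n\)):

```python
import numpy as np

# exact PM of the two-pole loop L(s) = wn^2/(s*(s + 2*zeta*wn)),
# i.e. wu/(s*(1 + s/w2)) with wn = sqrt(wu*w2) and zeta = 0.5*sqrt(w2/wu)
for zeta in [0.3, 0.4, 0.5, 0.6, 0.7, 0.87]:
    wc = np.sqrt(np.sqrt(1 + 4*zeta**4) - 2*zeta**2)   # gain-crossover frequency / wn
    PM = np.degrees(np.arctan(2*zeta/wc))
    print(f"zeta={zeta:.2f}  PM={PM:5.1f} deg  PM/zeta={PM/zeta:5.1f}")
```

The exact ratio drifts from about 110 at small \(\zeta\), through the 90 to 100 range around \(\zeta \approx 0.5\) to \(0.7\), and falls below 90 for heavily damped loops, so the rule is best read as a mid-range guide.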

additional Zero

\[\begin{align} TF &= \frac{s +\omega_z}{s^2+2\zeta \omega_ns+\omega_n^2} \\ &= \frac{\omega _z}{\omega _n^2}\cdot \frac{1+s/\omega _z}{1+s^2/\omega_n^2+2\zeta s/\omega_n} \end{align}\]

Let \(s=j\omega\) and omit factor, \[ A_\text{dB}(\omega) = 10\log[1+(\frac{\omega}{\omega _z})^2] - 10\log[1+\frac{\omega^4}{\omega_n^4}+\frac{2\omega^2(2\zeta ^2 -1)}{\omega_n^2}] \] peaking frequency \(\omega_\text{peak}\) can be obtained via \(\frac{d A_\text{dB}(\omega)}{d\omega} = 0\) \[ \omega_\text{peak} = \omega_z \sqrt{\sqrt{(\frac{\omega_n}{\omega_z})^4 - 2(\frac{\omega_n}{\omega_z})^2(2\zeta ^2-1)+1} - 1} \]

Settling Time

single-pole

image-20240725204501781

image-20240725204527121 \[ \tau \simeq \frac{1}{\beta \omega_\text{ugb}} \]

tau_1pole.drawio

two poles

Rise Time

Katsuhiko Ogata, Modern Control Engineering Fifth Edition

image-20240721180718116

For underdamped second order systems, the 0% to 100% rise time is normally used

For \(\text{PM}=70^o\)

  • \(\omega_2=3\omega_u\), that is \(\omega_n = 1.7\omega_u\).
  • \(\zeta = 0.87\)

Then \[ t_r = \frac{3.1}{\omega_u} \]

Settling Time

Gene F. Franklin, Feedback Control of Dynamic Systems, 8th Edition

image-20240721181956221

image-20240721182025547

As we know \[ \zeta \omega_n=\frac{1}{2}\sqrt{\frac{\omega_2}{\omega_u}}\cdot \sqrt{\omega_u\omega_2}=\frac{1}{2}\omega_2 \]

Then \[ t_s = \frac{9.2}{\omega_2} \]

For \(\text{PM}=70^o\), \(\omega_2 = 3\omega_u\), that is \[ t_s \simeq \frac{3}{\omega_u} \space\space \text{, for PM}=70^o \]

For \(\text{PM}=45^o\), \(\omega_2 = \omega_u\), that is \[ t_s \simeq \frac{9.2}{\omega_u} \space\space \text{, for PM}=45^o \]

The above equation is valid only for the underdamped case, \(\zeta=\frac{1}{2}\sqrt{\frac{\omega_2}{\omega_u}}\lt 1\), that is, \(\omega_2\lt 4\omega_u\)
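These estimates can be checked against the step response of the second-order closed-loop model \(\frac{\omega_u\omega_2}{s^2+\omega_2 s+\omega_u\omega_2}\) from earlier (a sketch with scipy; \(\omega_u\) normalized to 1 rad/s, a 1% settling band assumed):

```python
import numpy as np
from scipy import signal

wu = 1.0
for ratio, est in [(1.0, 9.2), (3.0, 3.0)]:      # w2/wu and the ts*wu estimate from the text
    w2 = ratio*wu
    sys = signal.TransferFunction([wu*w2], [1, w2, wu*w2])
    t = np.linspace(0, 40/wu, 40001)
    t, y = signal.step(sys, T=t)
    idx = np.flatnonzero(np.abs(y - 1) > 0.01)   # samples outside the 1% band
    ts = t[idx[-1]] if idx.size else 0.0
    print(f"w2/wu = {ratio}:  simulated ts*wu = {ts*wu:.1f},  estimate = {est}")
```

Both estimates come out slightly conservative, as expected, since they are based on the exponential decay envelope rather than the actual band crossings.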

C. T. Chuang, "Analysis of the settling behavior of an operational amplifier," in IEEE Journal of Solid-State Circuits, vol. 17, no. 1, pp. 74-80, Feb. 1982 [https://sci-hub.se/10.1109/JSSC.1982.1051689]

2 Stage RC filter

high frequency pass

image-20240112002314153

Since \(1/sC_1+R_1 \gg R_0\) \[ \frac{V_m}{V_i}(s) \simeq \frac{R_0}{R_0 + 1/sC_0} = \frac{sR_0C_0}{1+sR_0C_0} \] step response of \(V_m\) \[ V_m(t) = e^{-t/R_0C_0} \] where \(\tau = R_0C_0\)

And \(V_o(s)\) can be expressed as \[\begin{align} \frac{V_o}{V_i}(s) & \simeq \frac{sR_0C_0}{1+sR_0C_0} \cdot \frac{sR_1C_1}{1+sR_1C_1} \\ &= \frac{sR_0C_0R_1C_1}{R_0C_0-R_1C_1}\left(\frac{1}{1+sR_1C_1} - \frac{1}{1+sR_0C_0}\right) \end{align}\]

Then step response of \(Vo\) \[\begin{align} Vo(t) &= \frac{R_0C_0R_1C_1}{R_0C_0-R_1C_1} \left(\frac{1}{R_1C_1}e^{-t/R_1C_1} - \frac{1}{R_0C_0}e^{-t/R_0C_0}\right) \\ &= \frac{1}{R_0C_0-R_1C_1}\left(R_0C_0e^{-t/R_1C_1} - R_1C_1e^{-t/R_0C_0}\right) \\ &\approx \frac{1}{R_0C_0-R_1C_1}\left(R_0C_0e^{-t/R_1C_1} - R_1C_1\right) \end{align}\]

where \(\tau=R_1C_1\)

Partial-fraction Expansion

syms C0
syms R0
syms C1
syms R1
syms s

Z0 = 1/s/C1 + R1;
Z1 = R0*Z0/(R0+Z0);
vm = Z1 / (Z1 + 1/s/C0);
vo = R1/Z0 * vm;
>> partfrac(vm, s)

ans =

1 - (s*(C1*R0 + C1*R1) + 1)/(C0*C1*R0*R1*s^2 + (C0*R0 + C1*R0 + C1*R1)*s + 1)

>> partfrac(vo, s)

ans =

1 - (s*(C0*R0 + C1*R0 + C1*R1) + 1)/(C0*C1*R0*R1*s^2 + (C0*R0 + C1*R0 + C1*R1)*s + 1)

\[\begin{align} V_m(s) &= 1 - \frac{s(C_1R_0+C_1R_1)+1}{C_0C_1R_0R_1s^2+(C_0R_0+C_1R_0+C_1R_1)s+1} \\ V_o(s) &= 1 - \frac{s(C_0R_0+C_1R_0+C_1R_1)+1}{C_0C_1R_0R_1s^2+(C_0R_0+C_1R_0+C_1R_1)s+1} \end{align}\]

C0 = 200e-9;
R0 = 50;
C1 = 400e-15;
R1 = 200e3;

s = tf("s");

Z0 = 1/s/C1 + R1;
Z1 = R0*Z0/(R0+Z0);
vm = Z1 / (Z1 + 1/s/C0);
vo = R1/Z0 * vm;

vm_exp = 1 - (s*(C1*R0 + C1*R1) + 1)/(C0*C1*R0*R1*s^2 + (C0*R0 + C1*R0 + C1*R1)*s + 1);
vo_exp = 1 - (s*(C0*R0 + C1*R0 + C1*R1) + 1)/(C0*C1*R0*R1*s^2 + (C0*R0 + C1*R0 + C1*R1)*s + 1);

figure(1)
subplot(1,2,1)
step(vm, 500e-9, 'k-o');
hold on;
step(vm_exp, 500e-9, 'r-^')
title('vm step response')
grid on;
legend()


subplot(1,2,2)
step(vo, 500e-9, 'k-o');
hold on;
step(vo_exp, 500e-9, 'r-^')
title('vo step response')
grid on;
legend()


% with approximation
figure(2)
vm_exp2 = s*R0*C0/(1+s*R0*C0);
vo_exp2 = s*R0*C0/(1+s*R0*C0) * s*R1*C1/(1+s*R1*C1);

subplot(1,2,1)
step(vm, 500e-9, 'k-o');
hold on;
step(vm_exp2, 500e-9, 'r-^')
title('vm step response')
grid on;
legend()

subplot(1,2,2)
step(vo, 500e-9, 'k-o');
hold on;
step(vo_exp2, 500e-9, 'r-^')
title('vo step response')
grid on;
legend()

image-20240113181003272

image-20240113181032379


spectre simulation vs matlab

C0 = 200e-9;
R0 = 50;
C1 = 400e-15;
R1 = 200e3;

s = tf("s");

Z0 = 1/s/C1 + R1;
Z1 = R0*Z0/(R0+Z0);
vm = Z1 / (Z1 + 1/s/C0);
vo = R1/Z0 * vm;

step(vm, 500e-9);
hold on;
step(vo, 500e-9);

grid on
>> vm

vm =

1.024e-44 s^5 + 2.56e-37 s^4 + 1.6e-30 s^3
-----------------------------------------------------------
1.024e-44 s^5 + 2.571e-37 s^4 + 1.626e-30 s^3 + 1.6e-25 s^2

Continuous-time transfer function.

>> vo

vo =

8.194e-52 s^6 + 2.048e-44 s^5 + 1.28e-37 s^4
---------------------------------------------------------------------------
8.194e-52 s^6 + 3.081e-44 s^5 + 3.871e-37 s^4 + 1.638e-30 s^3 + 1.6e-25 s^2

Continuous-time transfer function.

image-20240112002155622

low frequency pass

image-20241218231322792

\[ \left\{ \begin{array}{cl} \frac{V_i - V_m}{R_0} &= C_0\frac{dV_m}{dt} + C_1\frac{dV_o}{dt} \\ \frac{V_m - V_o}{R_1} &= C_1\frac{dV_o}{dt} \\ V_m(t=0) &= 1 \\ V_o(t=0) &= 0 \end{array} \right. \]

Taking the initial condition into account, \[ \frac{dV_m}{dt} \overset{\mathcal{L}}{\longrightarrow} sV_M-V_{m0} \] where \(V_{m0} = 1\)

\[\begin{align} V_O(s) &= \frac{V_I}{s^2R_0C_0R_1C_1+s(R_0C_0+R_1C_1+R_0C_1)+1} + \frac{V_{m0}R_0C_0}{s^2R_0C_0R_1C_1+s(R_0C_0+R_1C_1+R_0C_1)+1} \\ &\approx \frac{V_I}{s^2R_0C_0R_1C_1+sR_0(C_0+C_1)+1} + \frac{V_{m0}R_0C_0}{s^2R_0C_0R_1C_1+sR_0(C_0+C_1)+1} \\ &= \frac{V_I}{R_0(C_0+C_1)}\left(\frac{R_0(C_0+C_1)}{sR_0(C_0+C_1)+1} - \frac{R_1\frac{C_0C_1}{C_0+C_1}}{sR_1\frac{C_0C_1}{C_0+C_1}+1}\right) \\ &+ \frac{C_0}{C_0+C_1}\left(\frac{R_0(C_0+C_1)}{sR_0(C_0+C_1)+1} - \frac{R_1\frac{C_0C_1}{C_0+C_1}}{sR_1\frac{C_0C_1}{C_0+C_1}+1}\right) \end{align}\]

with \(V_I = \frac{1}{s}\), using inverse Laplace transform \[\begin{align} V_o(t) &= 1 - \frac{C_1}{C_0+C_1}e^{-t/\tau_0}-\frac{C_0}{C_0+C_1}e^{-t/\tau_1} - \frac{R_1C_0C_1}{R_0(C_0+C_1)^2} \tag{Eq.0} \\ &= 1 - \frac{C_1}{C_0+C_1}e^{-t/\tau_0}-\frac{C_0}{C_0+C_1}e^{-t/\tau_1} \tag{Eq.1} \end{align}\]

where

\[\begin{align} \tau_0 &= R_0(C_0+C_1) \\ \tau_1 &= R_1C_1\cdot \frac{C_0}{C_0+C_1} \end{align}\]

and \(\tau_0 \gg \tau_1\)


image-20241218230713148

image-20241218231130166

R0 = 100e3;
C0 = 200e-15;
R1 = 10e3;
C1 = 100e-15;

s = tf('s');

VI = 1/s;
Vm0 = 1;

deno = (s^2*R0*C0*R1*C1 + s*(R0*C0 + R1*C1 + R0*C1) + 1);
VO = VI/deno + (Vm0*R0*C0)/deno;
VM = (1+s*R1*C1)*VO;

t = linspace(0,40,1e3); % ns
[vot, ~] = impulse(VO, t*1e-9);
[vmt, ~] = impulse(VM, t*1e-9);

tau0 = R0*(C0+C1)*1e9; %ns
tau1 = R1*C0*C1/(C0+C1)*1e9; %ns
y0 = 1 - C1/(C0+C1)*exp(-t/tau0) - C0/(C0+C1)*exp(-t/tau1) - R1*C0*C1/R0/(C0+C1)^2; % Eq.0
y1 = 1 - C1/(C0+C1)*exp(-t/tau0) - C0/(C0+C1)*exp(-t/tau1); % Eq.1

plot(t, vot, t, vmt,LineWidth=2);
hold on
plot(t,y0, t,y1, LineWidth=2, LineStyle="--" );
xlim([-10, 40])
legend('V_o(t)', 'V_m(t)','y_0(t)', 'y_1(t)', fontsize=12)
xlabel('t (ns)')
ylabel('mag (V)')
grid on;

reference

Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini. 2018. Feedback Control of Dynamic Systems (8th Edition) (8th. ed.). Pearson.

Katsuhiko Ogata, Modern Control Engineering, 5th edition

Conversion Relationships

Capacitor

image-20231224163730529


image-20240119000951498

image-20240119001025892

image-20240119001309410

Inductor

image-20231224163740411

Derivation

image-20231224163905233

image-20231224163916109

series or parallel representation

reference

Tank Circuits/Impedances [https://stanford.edu/class/ee133/handouts/lecturenotes/lecture5_tank.pdf]

Resonant Circuits [https://web.ece.ucsb.edu/~long/ece145b/Resonators.pdf]

Series & Parallel Impedance Parameters and Equivalent Circuits [https://assets.testequity.com/te1/Documents/pdf/series-parallel-impedance-parameters-an.pdf]

ES Lecture 35: Non ideal capacitor, Capacitor Q and series RC to parallel RC conversion [https://youtu.be/CJ_2U5pEB4o?si=4j4CWsLSapeu-hBo]

Are AC-Driven Circuits Linear?

\[ f(x_1 + x_2)= f(x_1)+ f(x_2) \]

AC-driven circuits are often mistaken for non-linear circuits; however, what determines the linearity of a circuit is the relationship between the voltage and the current.

While an AC signal varies with time, it still exhibits a linear relationship across elements like resistors, capacitors, and inductors. Therefore, AC driven circuits are linear.

Phasor

Phasor concept has no real physical significance. It is just a convenient mathematical tool.

Phasor analysis determines the steady-state response to a linear circuit driven by sinusoidal sources with frequency \(f\)

If your circuit includes transistors or other nonlinear components, all is not lost. There is an extension of phasor analysis to nonlinear circuits called small-signal analysis in which you linearize the components before performing phasor analysis - AC analyses of SPICE

A sinusoid is characterized by 3 numbers, its amplitude, its phase, and its frequency. For example \[ v(t) = A\cos(\omega t + \phi) \tag{1} \] In a circuit there will be many signals but in the case of phasor analysis they will all have the same frequency. For this reason, the signals are characterized using only their amplitude and phase.

The combination of an amplitude and phase to describe a signal is the phasor for that signal.

Thus, the phasor for the signal in \((1)\) is \(A\angle \phi\)

In general, phasors are functions of frequency

Often it is preferable to represent a phasor using complex numbers rather than using amplitude and phase. In this case we represent the signal as: \[ v(t) = \Re\{Ve^{j\omega t} \} \tag{2} \] where \(V=Ae^{j\phi}\) is the phasor.

\((1)\) and \((2)\) are the same

Phasor Model of a Resistor

A linear resistor is defined by the equation \(v = Ri\)

Now, assume that the resistor current is described with the phasor \(I\). Then \[ i(t) = \Re\{Ie^{j\omega t}\} \] \(R\) is a real constant, and so the voltage can be computed to be \[ v(t) = R\Re\{Ie^{j\omega t}\} = \Re\{RIe^{j\omega t}\} = \Re\{Ve^{j\omega t}\} \] where \(V\) is the phasor representation for \(v\), i.e. \[ V = RI \]

  1. Thus, given the phasor for the current we can directly compute the phasor for the voltage across the resistor.

  2. Similarly, given the phasor for the voltage across a resistor we can compute the phasor for the current through the resistor using \(I = \frac{V}{R}\)

Phasor Model of a Capacitor

A linear capacitor is defined by the equation \(i=C\frac{dv}{dt}\)

Now, assume that the voltage across the capacitor is described with the phasor \(V\). Then \[ v(t) = \Re\{ V e^{j\omega t}\} \] \(C\) is a real constant \[ i(t) = C\Re\{\frac{d}{dt}V e^{j\omega t}\} = \Re\{j\omega C V e^{j\omega t}\} \] The phasor representation for \(i\) is \(i(t) = \Re\{Ie^{j\omega t}\}\), that is \(I = j\omega C V\)

  1. Thus, given the phasor for the voltage across a capacitor we can directly compute the phasor for the current through the capacitor.

  2. Similarly, given the phasor for the current through a capacitor we can compute the phasor for the voltage across the capacitor using \(V=\frac{I}{j\omega C}\)
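A small numeric check of \(I = j\omega CV\) against the time-domain definition \(i=C\frac{dv}{dt}\) (a sketch; the capacitance, frequency and phasor values are arbitrary):

```python
import numpy as np

C, f = 1e-9, 1e6                             # arbitrary capacitance and frequency
w = 2*np.pi*f
V = 2.0*np.exp(1j*np.deg2rad(30))            # phasor V: 2 V amplitude, 30 degree phase

t = np.linspace(0, 3/f, 30001)
v = np.real(V*np.exp(1j*w*t))                # v(t) = Re{V e^{jwt}}

i_time   = C*np.gradient(v, t)               # i(t) = C dv/dt, computed numerically
i_phasor = np.real(1j*w*C*V*np.exp(1j*w*t))  # i(t) from the phasor I = jwC*V

print(np.max(np.abs(i_time - i_phasor)), "vs |I| =", w*C*abs(V))
```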

Phasor Model of an Inductor

A linear inductor is defined by the equation \(v=L\frac{di}{dt}\)

Now, assume that the inductor current is described with the phasor \(I\). Then \[ i(t) = \Re\{ I e^{j\omega t}\} \] \(L\) is a real constant, and so the voltage can be computed to be \[ v(t) = L\Re\{\frac{d}{dt}I e^{j\omega t}\} = \Re\{j\omega L I e^{j\omega t}\} \] The phasor representation for \(v\) is \(v(t) = \Re\{Ve^{j\omega t}\}\), that is \(V = j\omega L I\)

  1. Thus, given the phasor for the current we can directly compute the phasor for the voltage across the inductor.

  2. Similarly, given the phasor for the voltage across an inductor we can compute the phasor for the current through the inductor using \(I=\frac{V}{j\omega L}\)

Impedance and Admittance

Impedance and admittance are generalizations of resistance and conductance.

They differ from resistance and conductance in that they are complex and they vary with frequency.

Impedance is defined to be the ratio of the phasor for the voltage across the component and the current through the component: \[ Z = \frac{V}{I} \]

Impedance is a complex value. The real part of the impedance is referred to as the resistance and the imaginary part is referred to as the reactance

For a linear component, admittance is defined to be the ratio of the phasor for the current through the component and the voltage across the component: \[ Y = \frac{I}{V} \]

Admittance is a complex value. The real part of the admittance is referred to as the conductance and the imaginary part is referred to as the susceptance.

Response to Complex Exponentials

The response of an LTI system to a complex exponential input is the same complex exponential with only a change in amplitude

\[\begin{align} y(t) &= H(s)e^{st} \\ H(s) &= \int_{-\infty}^{+\infty}h(\tau)e^{-s\tau}d\tau \end{align}\]

where \(h(t)\) is the impulse response of a continuous-time LTI system

convolution integral is used here

\[\begin{align} y[n] &= H(z)z^n \\ H(z) &= \sum_{k=-\infty}^{+\infty}h[k]z^{-k} \end{align}\]

where \(h(n)\) is the impulse response of a discrete-time LTI system

convolution sum is used here

The signals of the form \(e^{st}\) in continuous time and \(z^{n}\) in discrete time, where \(s\) and \(z\) are complex numbers are referred to as an eigenfunction of the system, and the amplitude factor \(H(s)\), \(H(z)\) is referred to as the system's eigenvalue

Laplace transform

One of the important applications of the Laplace transform is in the analysis and characterization of LTI systems, which stems directly from the convolution property \[ Y(s) = H(s)X(s) \] where \(X(s)\), \(Y(s)\), and \(H(s)\) are the Laplace transforms of the input, output, and impulse response of the system, respectively

From the response of LTI systems to complex exponentials, if the input to an LTI system is \(x(t) = e^{st}\), with \(s\) in the ROC of \(H(s)\), then the output will be \(y(t)=H(s)e^{st}\); i.e., \(e^{st}\) is an eigenfunction of the system with eigenvalue equal to the Laplace transform of the impulse response.

s-Domain Element Models

image-20231223225541693

image-20231223225609893

Sinusoidal Steady-State Analysis

Here Sinusoidal means that source excitations have the form \(V_s\cos(\omega t +\theta)\) or \(V_s\sin(\omega t+\theta)\)

Steady state means that all transient behavior of the stable circuit has died out, i.e., decayed to zero

image-20231223212820547

image-20231223212846596

image-20231223213016508

\(s\)-domain and phasor-domain

Phasor analysis is a technique to find the steady-state response when the system input is a sinusoid. That is, phasor analysis is sinusoidal analysis.

  • Phasor analysis is a powerful technique with which to find the steady-state portion of the complete response.
  • Phasor analysis does not find the transient response.
  • Phasor analysis does not find the complete response.

The beauty of the phasor-domain circuit is that it is described by algebraic KVL and KCL equations with time-invariant sources, not differential equations of time

image-20231224001422189

image-20231223230739219

The difference here is that Laplace analysis can also give us the transient response

image-20231224132406755

General Response Classifications

img

  • zero-input response, zero-state response & complete response

    image-20231223235252850

    The zero-state response is given by \(\mathscr{L}^{-1}[H(s)F(s)]\), for an arbitrary \(s\)-domain input \(F(s)\)

    where \(Z_L(s) = sL\), the inductor with zero initial current \(i_L(0)=0\) and \(Z_C(s)=1/sC\) with zero initial voltage \(v_C(0)=0\)

  • transient response & steady-state response

    image-20231224000454014

  • natural response & forced response

    image-20231224000817438


image-20240118212304219

Transfer Functions and Frequency Response

transfer function

The transfer function \(H(s)\) is the ratio of the Laplace transform of the output of the system to its input assuming all zero initial conditions.

image-20240106185523937

image-20240106185937270

frequency response

An immediate consequence of convolution is that an input of the form \(e^{st}\) results in an output \[ y(t) = H(s)e^{st} \] where the specific constant \(s\) may be complex, expressed as \(s = \sigma + j\omega\)

A very common way to use the exponential response of LTIs is in finding the frequency response i.e. response to a sinusoid

First, we express the sinusoid as a sum of two exponential expressions (Euler’s relation): \[ \cos(\omega t) = \frac{1}{2}(e^{j\omega t}+e^{-j\omega t}) \] If we let \(s=j\omega\), then \(H(-j\omega)=H^*(j\omega)\), in polar form \(H(j\omega)=Me^{j\phi}\) and \(H(-j\omega)=Me^{-j\phi}\). \[\begin{align} y_+(t) & = H(s)e^{st}|_{s=j\omega} = H(j\omega)e^{j\omega t} = M e^{j(\omega t + \phi)} \\ y_-(t) & = H(s)e^{st}|_{s=-j\omega} = H(-j\omega)e^{-j\omega t} = M e^{-j(\omega t + \phi)} \end{align}\]

By superposition, the response to the sum of these two exponentials, which make up the cosine signal, is the sum of the responses \[\begin{align} y(t) &= \frac{1}{2}[H(j\omega)e^{j\omega t} + H(-j\omega)e^{-j\omega t}] \\ &= \frac{M}{2}[e^{j(\omega t + \phi)} + e^{-j(\omega t + \phi)}] \\ &= M\cos(\omega t + \phi) \end{align}\]

where \(M = |H(j\omega)|\) and \(\phi = \angle H(j\omega)\)

This means if a system represented by the transfer function \(H(s)\) has a sinusoidal input, the output will be sinusoidal at the same frequency with magnitude \(M\) and will be shifted in phase by the angle \(\phi\)
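A numeric illustration with a concrete first-order system \(H(s)=\frac{1}{s+1}\) driven by \(\cos(2t)\) (a sketch using scipy; the system and frequency are arbitrary choices):

```python
import numpy as np
from scipy import signal

sys = signal.TransferFunction([1], [1, 1])   # H(s) = 1/(s+1)
w = 2.0                                      # input frequency [rad/s]

t = np.linspace(0, 30, 30001)
u = np.cos(w*t)
_, y, _ = signal.lsim(sys, U=u, T=t)

H = 1/(1j*w + 1)                             # H(jw)
y_ss = np.abs(H)*np.cos(w*t + np.angle(H))   # predicted steady state: M*cos(wt + phi)

mask = t > 20                                # after the e^{-t} transient has died out
print(np.max(np.abs(y[mask] - y_ss[mask])))  # essentially zero
```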

Laplace transform & Fourier transform

  • Laplace transforms such as \(Y(s)=H(s)U(s)\) can be used to study the complete response characteristics of systems, including the transient response—that is, the time response to an initial condition or suddenly applied signal
  • This is in contrast to the use of Fourier transforms, which only take into account the steady-state response

Given a general linear system with transfer function \(H(s)\) and an input signal \(u(t)\), the procedure for determining \(y(t)\) using the Laplace transform is given by the following steps:

image-20240106224403401

reference

Ken Kundert. Introduction to Phasors. Designer’s Guide Community. September 2011.

How to Perform Linearity Circuit Analysis [https://resources.pcb.cadence.com/blog/2021-how-to-perform-linearity-circuit-analysis]

Stephen P. Boyd. EE102 Lecture 7 Circuit analysis via Laplace transform [https://web.stanford.edu/~boyd/ee102/laplace_ckts.pdf]

Cheng-Kok Koh, EE695K VLSI Interconnect, S-Domain Analysis [https://engineering.purdue.edu/~chengkok/ee695K/lec3c.pdf]

Kenneth R. Demarest, Circuit Analysis using Phasors, Laplace Transforms, and Network Functions [https://people.eecs.ku.edu/~demarest/212/Phasor%20and%20Laplace%20review.pdf]

DeCarlo, R. A., & Lin, P.-M. (2009). Linear circuit analysis : time domain, phasor, and Laplace transform approaches (3rd ed).

Davis, Artice M.. "Linear Circuit Analysis." The Electrical Engineering Handbook - Six Volume Set (1998)

Duane Marcy, Fundamentals of Linear Systems [http://lcs-vc-marcy.syr.edu:8080/Chapter22.html]

Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini. 2018. Feedback Control of Dynamic Systems (8th Edition) (8th. ed.). Pearson.

Data Register, DR:

  • Bypass Register, BR
  • Boundary Scan Register, BSR

Instruction Register, IR

TAP Controller

image-20240113191225838

  • The FSM and the Shift Registers of DR and IR work at the posedge of the clock
  • TMS, TDI, TDO and the Hold Registers of DR and IR change value at the negedge of the clock

image-20240113191409296

image-20240113191526490

Capture-IR loads the fixed value 01; the fixed value is for easier fault detection

image-20231129232443249

image-20231129233218011

After power-up, they may not be in sync, but there is a trick. Look at the state machine and notice that no matter what state you are in, if TMS stays at "1" for five clocks, the TAP controller goes back to the "Test-Logic Reset" state. That's used to synchronize the TAP controllers.

It is important to note that in a typical Boundary-Scan test, the time between launching a signal from a driver (at the falling edge of the test clock (TCK) in the Update-DR or Update-IR TAP Controller state) and capturing that signal (at the rising edge of TCK in the Capture-DR TAP Controller state) is no less than 2.5 TCK cycles

Further, the time between successive launches on a driver is governed not only by the TCK rate but also by the amount of serial data shifting needed to load the next pattern data into the concatenated Boundary-Scan Registers of the Boundary-Scan chain

Thus the effective test data rate of a driver could be thousands of times lower than the TCK rate

  1. For DC-coupled interconnect, this time is of no concern
  2. For AC-coupled interconnect, the signal may easily decay partially or completely before it can be captured
  3. If only partial decay occurs before capture, that decay will very likely be completed before the driver produces the next edge

AC-coupling

In general, AC-coupling can distort a signal transmitted across a channel depending on its frequency.

Figure 5

  • The high frequency signal is relatively unaffected by the coupling
  • The low frequency signal is severely impacted
    1. it decays to \(V_T\) after a few time constants
    2. its amplitude is double the input amplitude (transient response: before the AC-coupling capacitor the swing is \(-A_p \to A_p\); after the AC-coupling capacitor it is \(V_T \to V_T+2A_p\)). A key item to note is that the transitions in the original signal are preserved, although their start and end points are offset compared to where they were in the high-frequency case (a simple RC model of this is sketched below).
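A minimal model of the low-frequency case (a sketch: the coupling network is reduced to a single-pole high-pass referenced to the termination voltage \(V_T\); the time constant, bit period and levels are arbitrary):

```python
import numpy as np
from scipy import signal

VT, Ap = 0.6, 0.4             # termination voltage and driver amplitude (arbitrary)
tau    = 1e-9                 # coupling time constant R*C (arbitrary)
Tbit   = 10e-9                # bit period >> tau, i.e. the "low frequency" case

t   = np.linspace(0, 8*Tbit, 80001)
drv = Ap*np.sign(np.sin(2*np.pi*t/(2*Tbit)))        # +/-Ap square wave from the driver

hpf = signal.TransferFunction([tau, 0], [tau, 1])   # s*tau/(1 + s*tau)
_, v_hp, _ = signal.lsim(hpf, U=drv, T=t)
vout = VT + v_hp                                    # signal seen after the coupling cap

# every edge is preserved as a ~2*Ap step that then decays back toward VT
print(vout.max() - VT, vout.min() - VT)             # ~ +2*Ap and -2*Ap
```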

Test signal implementation

The test data is either the content of the Boundary-Scan Register Update latch (U) when executing the (DC) EXTEST instruction, or an "AC Signal" when an AC testing instruction is loaded into the device.

The AC signal is a test waveform suited for transmission through AC-coupling

image-20240113184502597

Test signal reception

  • When an AC testing instruction is loaded, a specialized test receiver detects transitions of the AC signal seen at the input and determines if this represents a logic '0' or '1'
  • When EXTEST is loaded, the input signal level is detected and sent to the output of the test receiver to the Boundary-Scan Register cell

When testing for a shorted capacitor, the test software must ensure that enough time has passed for the signal to decay before entering Capture-DR, either by stopping TCK or by spending additional TCK cycles in the Run-Test/Idle TAP Controller state

EXTEST_PULSE & EXTEST_TRAIN

The two new AC-test instructions provided by this standard differ primarily in the number and timing of transitions to provide flexibility in dealing with the specific dynamic behavior of the channels being tested

AC Test Signal essentially modulates test data so that it will propagate through AC-coupled channels, for devices that contain AC pins

Tools should use the EXTEST_PULSE instruction unless there is a specific requirement for the EXTEST_TRAIN instruction

EXTEST_PULSE

Generates two additional driver transitions and allows a tester to vary the time between them depending on how many TCK cycles the TAP is left in the Run-Test/Idle TAP Controller state.

This is intended to allow any undesired transient condition to decay to a DC steady-state value when that will make the final transition more reliably detectable

The duration in the Run-Test/Idle TAP Controller state should be at least three times the high-pass coupling time constant. This allows the first additional transition to decay away to the DC steady-state value for the channel, and ensures that the full amplitude of the final transition is added to or subtracted from that steady-state value

This establishes a known initial condition for the final transition and permits reliable specification of the detection threshold of the test receiver

image-20240113190314947

EXTEST_TRAIN

Generates multiple additional transitions, the number depending on how long the TAP is left in the Run-Test/Idle TAP Controller state

This is intended to allow any undesired transient condition to decay to an AC steady-state value when that will make the final transition more reliably detectable

image-20240113190345323

IEEE Std 1149.6-2003

This standard is built on top of IEEE Std 1149.1 using the same Test Access Port structure and Boundary-Scan architecture.

  • It adds the concept of a "test receiver" to input pins that are expected to handle differential and/or AC-coupling
  • It adds two new instructions that cause drivers to emit AC waveforms that are processed by test receivers.

JTAG Instruction

Implementation

  • AC mode hysteresis, detect transistion
  • DC mode threshold is determined by jtag initial value

reference

IEEE Std 1149.1-2001, IEEE Standard Test Access Port and Boundary-Scan Architecture, IEEE, 2001

IEEE Std 1149.6-2003, IEEE Standard for Boundary-Scan Testing of Advanced Digital Networks, IEEE, 2003

IEEE 1149.6 Tutorial | Testing AC-coupled and Differential High-speed Nets [https://www.asset-intertech.com/resources/eresources/ieee-11496-tutorial-testing-ac-coupled-and-differential-high-speed-nets/]

Prof. James Chien-Mo Li, Lab of Dependable Systems, National Taiwan University. VLSI Testing [http://cc.ee.ntu.edu.tw/~cmli/VLSItesting/]

K.P. Parker, The Boundary Scan Handbook, 3rd ed., Kluwer Academic, 2003.

B. Eklow, K. P. Parker and C. F. Barnhart, "IEEE 1149.6: a boundary-scan standard for advanced digital networks," in IEEE Design & Test of Computers, vol. 20, no. 5, pp. 76-83, Sept.-Oct. 2003, doi: 10.1109/MDT.2003.1232259.

Effective Switching resistance

image-20231114001209252

https://www.eecis.udel.edu/~vsaxena/courses/ece445/s19/Lecture%20Notes/lec15_ece445.pdf

wire delay

The Elmore Delay

image-20230624234813719

image-20230624234940864

image-20230625001756173

Basic idea: use of mean of \(v'(t)\) to approximate median of \(v'(t)\)

image-20230624235148246

image-20230625002239199

Elmore delay approximates the median of \(h(t)\) by the mean of \(h(t)\)

Distributed RC-Line

image-20230624224005736

Lumped approximations

\(rc\)-models

If your simulator does not support a distributed \(rc\)-model, or if the computational complexity of these models slows down your simulation too much, you can construct a simple yet accurate model yourself by approximating the distributed \(rc\) by a lumped RC network with a limited number of elements

image-20230624230057265

The accuracy of the model is determined by the number of stages. For instance, the error of the \(\Pi -3\) model is less than 3%, which is generally sufficient.

Why use "\(\Pi\) Model"

image-20230624230800255

examples

image-20230624224643487

image-20230624224923241

Wire Inductive Effect

  • RC delay increases quadratically with length
  • LC delay (speed of light flight time) increases linearly with length

Inductance will only be important to the delay of low-resistance signals such as wide clock lines

wave

Signal propagates over the wire as a wave (rather than diffusing as in \(rc\) only models)

Signal propagates by alternately transferring energy from capacitive to inductive modes

reference

Akio Kitagawa, Analog layout design https://mixsignal.files.wordpress.com/2013/03/analog-layout.pdf

THE WIRE http://bwrcs.eecs.berkeley.edu/Classes/icdesign/ee141_f01/Notes/chapter4.pdf

Anoop Veliyath, Design Engineer, Cadence Design Systems. Accurately Modeling Transmission Line Behavior with an LC Network-based Approach [pdf]

Mark Horowitz. Lecture 2: Wires and Wire Models [pdf]

Neil Weste and David Harris. 2010. CMOS VLSI Design: A Circuits and Systems Perspective (4th. ed.). Addison-Wesley Publishing Company, USA.

Cheng-Kok Koh. EE695K Modeling and Optimization of High Performance Interconnect [lec3a_pdf]

Vishal Saxena. ECE 445 Intro to VLSI Design: Lectures for Spring 2019 https://www.eecis.udel.edu/~vsaxena/courses/ece445/s19/ECE445.htm

image-20241106231114717

Ensemble average

[https://ece-research.unm.edu/bsanthan/ece541/stat.pdf]

[https://www.nii.ac.jp/qis/first-quantum/e/forStudents/lecture/pdf/noise/chapter1.pdf]

  • Time average: time-averaged quantities for the \(i\)-th member of the ensemble
  • Ensemble average: ensemble-averaged quantities for all members of the ensemble at a certain time

image-20241116113758119

image-20241116215119239

image-20241116215140298

where \(\theta\) is one member of the ensemble; \(p(x)dx\) is the probability that \(x\) is found in the interval \([x, x + dx]\)

autocorrelation, Stationarity & Ergodicity

autocorrelation

image-20241116112504606

The expectation returns the probability-weighted average of the specific function at that specific time over all possible realizations of the process

Stationarity

[https://ece-research.unm.edu/bsanthan/ece541/station.pdf]

image-20241123221623537

image-20240720140527704

Ergodicity

ensemble autocorrelation and temporal autocorrelation (time autocorrelation)

image-20240719230346944

image-20240719210621021


image-20241123004051314

LTI Filtering of WSS process

mean

image-20240917104857284

image-20240917104916086


image-20240827221945277

autocorrelation

deterministic autocorrelation function

image-20240427170024123

\[ R_{yy}(\tau) = h(\tau)*R_{xx}(\tau)*h(-\tau) =R_{xx}(\tau)*h(\tau)*h(-\tau) \]

image-20240907211343832

Why is \(\overline{R}_{hh}(\tau) \overset{\Delta}{=} h(\tau)*h(-\tau)\) an autocorrelation? The proof is as follows:

\[\begin{align} \overline{R}_{hh}(\tau) &= h(\tau)*h(-\tau) \\ &= \int_{-\infty}^{\infty}h(x)h(-(\tau - x))dx \\ &= \int_{-\infty}^{\infty}h(x)h(x-\tau)dx \\ &=\int_{-\infty}^{\infty}h(x+\tau)h(x)dx \end{align}\]
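A one-line numeric confirmation that \(h(\tau)*h(-\tau)\) is the (deterministic) autocorrelation of \(h\) (a sketch with an arbitrary discrete impulse response):

```python
import numpy as np

h = np.array([1.0, 0.5, -0.25, 0.125])     # arbitrary impulse response
conv = np.convolve(h, h[::-1])             # h(tau) * h(-tau)
corr = np.correlate(h, h, mode='full')     # deterministic autocorrelation of h
print(np.allclose(conv, corr))             # True
```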


PSD

image-20240827222224395

image-20240827222235906

Topic 6 Random Processes and Signals [https://www.robots.ox.ac.uk/~dwm/Courses/2TF_2021/N6.pdf]

Alan V. Oppenheim, Introduction To Communication, Control, And Signal Processing [https://ocw.mit.edu/courses/6-011-introduction-to-communication-control-and-signal-processing-spring-2010/a6bddaee5966f6e73450e6fe79ab0566_MIT6_011S10_notes.pdf]

Balu Santhanam, Probability Theory & Stochastic Process 2020: LTI Systems and Random Signals [https://ece-research.unm.edu/bsanthan/ece541/LTI.pdf]


Time Reversal \[ x(-t) \overset{FT}{\longrightarrow} X(-j\omega) \]

if \(x(t)\) is real, then \(X(j\omega)\)​ has conjugate symmetry \[ X(-j\omega) = X^*(j\omega) \]

Derivatives of Random Processes

since \(x(t)\) is a stationary process and \(y(t) = \frac{dx(t)}{dt}\)

Using \(R_{yy}(\tau) = h(\tau)*R_{xx}(\tau)*h(-\tau)\)

\[\begin{align} R_{yy}(\tau) &= \mathcal{F}^{-1}[H(j\omega)\Phi_{xx}(j\omega)H(-j\omega)] \\ &= \mathcal{F}^{-1}[-(j\omega)^2\Phi_{xx}(j\omega)] \end{align}\]

we obtain the autocorrelation function of the output process as \[ R_{yy}(\tau) = -\frac{d^2}{d\tau^2}R_{xx}(\tau) \]

Liu Congfeng, Xidian University. Random Signal Processing: Chapter 5 Linear System: Random Process [https://web.xidian.edu.cn/cfliu/files/20121125_153218.pdf]

[https://sharif.ir/~bahram/sp4cl/PapoulisLectureSlides/lectr14.pdf]

Periodogram

The periodogram is in fact the Fourier transform of the aperiodic correlation of the windowed data sequence

image-20240907215822425

image-20240907215957865

image-20240907230715637

estimating continuous-time stationary random signal

periodogram.drawio

The sequence \(x[n]\) is typically multiplied by a finite-duration window \(w[n]\), since the input to the DFT must be of finite duration. This produces the finite-length sequence \(v[n] = w[n]x[n]\)

image-20240910005608007

image-20240910005927534

image-20240910005723458

\[\begin{align} \hat{P}_{ss}(\Omega) &= \frac{|V(e^{j\omega})|^2}{LU} \\ &= \frac{|V(e^{j\omega})|^2}{\sum_{n=0}^{L-1}(w[n])^2} \tag{1}\\ &= \frac{L|V(e^{j\omega})|^2}{\sum_{k=0}^{L-1}(W[k])^2} \tag{2} \end{align}\]

image-20240910010638376

That is, by \((1)\) \[ \hat{P}_{ss}(\Omega) = T_s\hat{P}_{xx}(\omega) = \frac{T_s|V(e^{j\omega})|^2}{\sum_{n=0}^{L-1}(w[n])^2}=\frac{|V(e^{j\omega})|^2}{f_{res}L\sum_{n=0}^{L-1}(w[n])^2} \]

That is, by \((2)\) \[ \hat{P}_{ss}(\Omega) = T_s\hat{P}_{xx}(\omega) = \frac{T_sL|V(e^{j\omega})|^2}{\sum_{k=0}^{L-1}(W[k])^2} = \frac{|V(e^{j\omega})|^2}{f_{res}\sum_{k=0}^{L-1}(W[k])^2} \]
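A quick check of the normalization in \((1)\) using white noise of known density (a sketch; scipy's `periodogram` and `welch` apply the same \(\sum_n w[n]^2\) scaling):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs, L  = 1e4, 4096
sigma2 = 2.0                                    # variance of the white noise
x = rng.normal(scale=np.sqrt(sigma2), size=L)   # one-sided PSD should be 2*sigma2/fs

w = signal.windows.hann(L)
V = np.fft.rfft(w*x)
P_two_sided = np.abs(V)**2/(fs*np.sum(w**2))    # Ts*|V|^2 / sum(w[n]^2)   [V^2/Hz]
P_one_sided = 2*P_two_sided                     # fold negative frequencies (DC/Nyquist bins ignored)

print(P_one_sided[1:-1].mean(), "vs", 2*sigma2/fs)
```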

!! ENBW

Wiener-Khinchin theorem

Norbert Wiener proved this theorem for the case of a deterministic function in 1930; Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934. Albert Einstein explained, without proofs, the idea in a brief two-page memo in 1914

For \(x(t)\), the Fourier transform over a limited period of time \([-T/2, +T/2]\) is \(X_T(f) = \int_{-T/2}^{T/2}x(t)e^{-j2\pi ft}dt\)

With Parseval's theorem \[ \int_{-T/2}^{T/2}|x(t)|^2dt = \int_{-\infty}^{\infty}|X_T(f)|^2df \] So that \[ \frac{1}{T}\int_{-T/2}^{T/2}|x(t)|^2dt = \int_{-\infty}^{\infty}\frac{1}{T}|X_T(f)|^2df \]

where the quantity \(\frac{1}{T}|X_T(f)|^2\) can be interpreted as the distribution of power in the frequency domain

For each \(f\) this quantity is a random variable, since it is a function of the random process \(x(t)\)

The power spectral density (PSD) \(S_x(f )\) is defined as the limit of the expectation of the expression above, for large \(T\): \[ S_x(f) = \lim _{T\to \infty}\mathrm{E}\left[ \frac{1}{T}|X_T(f)|^2 \right] \]

The Wiener-Khinchin theorem ensures that for well-behaved wide-sense stationary processes the limit exists and is equal to the Fourier transform of the autocorrelation \[\begin{align} S_x(f) &= \int_{-\infty}^{+\infty}R_x(\tau)e^{-j2\pi f \tau}d\tau \\ R_x(\tau) &= \int_{-\infty}^{+\infty}S_x(f)e^{j2\pi f \tau}df \end{align}\]

Note: \(S_x(f)\) in Hz and inverse Fourier Transform in Hz (\(\frac{1}{2\pi}d\omega = df\))

image-20240910003805151

[https://www.robots.ox.ac.uk/~dwm/Courses/2TF_2011/2TF-L5.pdf]


Example

image-20240904203802604

Remember: impulse scaling

image-20240718210137344 \[ \cos(2\pi f_0t) \overset{\mathcal{F}}{\longrightarrow} \frac{1}{2}[\delta(f -f_0)+\delta(f+f_0)] \]

Energy Signal

image-20240910004411501

image-20240910004421791

image-20240910004448439

Wiener Process (Brownian Motion)

Dennis Sun, Introduction to Probability: Lesson 49 Brownian Motion [https://dlsun.github.io/probability/brownian-motion.html]

Wiener process (also called Brownian motion)

unnamed-chunk-178-1

image-20241202001449997

NRZ PSD

image-20231111100420675

image-20231111101322771

image-20231110224237933

Lecture 26 Autocorrelation Functions of Random Binary Processes [https://bpb-us-e1.wpmucdn.com/sites.gatech.edu/dist/a/578/files/2003/12/ECE3075A-26.pdf]

Lecture 32 Correlation Functions & Power Density Spectrum, Cross-spectral Density [https://bpb-us-e1.wpmucdn.com/sites.gatech.edu/dist/a/578/files/2003/12/ECE3075A-32.pdf]


Normal Distribution and Input-Referred Noise [https://a2d2ic.wordpress.com/2013/06/05/normal-distribution-and-input-referred-noise/]

Fig.1 PDF for sum of a large number of random variables

Fig.2 The noise signal, its auto correlation function, and spectral density [2]

reference

L.W. Couch, Digital and Analog Communication Systems, 8th Edition, 2013

Alan V Oppenheim, George C. Verghese, Signals, Systems and Inference, 1st edition

R. Ziemer and W. Tranter, Principles of Communications, Seventh Edition, 2013

PSS and HB Overview

The steady-state response is the response that results after any transient effects have dissipated.

The large signal solution is the starting point for small-signal analyses, including periodic AC, periodic transfer function, periodic noise, periodic stability, and periodic scattering parameter analyses.

Designers refer to periodic steady-state analysis in the time domain as "PSS" and to the corresponding frequency-domain formulation as "HB"

image-20231028163033255

Harmonic Balance Analysis

The idea of harmonic balance is to find a set of port voltage waveforms (or, alternatively, the harmonic voltage components) that give the same currents in both the linear-network equations and nonlinear-network equations

that is, the currents satisfy Kirchhoff's current law

Define an error function at each harmonic, \(f_k\), where \[ f_k = I_{\text{LIN}}(k\omega) + I_{\text{NL}}(k\omega) \] where \(k=0, 1, 2,...,K\)

Note that each \(f_k\) is implicitly a function of all voltage components \(V(k\omega)\)

Newton Solution of the Harmonic-Balance Equation

Iterative Process and Jacobian Formulation

image-20231108222451147

The elements of the Jacobian are the derivatives \[ \frac{\partial F_{n,k}}{\partial V_{m,l}} \] where \(n\) and \(m\) are the port indices \((1,...,N)\), and \(k\) and \(l\) are the harmonic indices \((0,...,K)\)

Selecting the Number of Harmonics and Time Samples

In theory, the waveforms generated in nonlinear analysis have an infinite number of harmonics, so a complete description of the operation of a nonlinear circuit would appear to require current and voltage vectors of infinite dimension.

Fortunately, the magnitudes of frequency components invariably decrease with frequency; otherwise the time waveforms would represent infinite power.

Initial Estimate

One important property of Newton's method is that its speed and reliability of convergence depend strongly upon the initial estimate of the solution vector.

Shooting Newton

TODO 📅

Nonlinearity & Linear Time-Varying Nature

Nonlinearity Nature

The nonlinearity causes the signal to be replicated at multiples of the carrier, an effect referred to as harmonic distortion, and adds a skirt to the signal that increases its bandwidth, an effect referred to as intermodulation distortion

image-20231029093504162

It is possible to eliminate the effect of harmonic distortion with a bandpass filter, however the frequency of the intermodulation distortion products overlaps the frequency of the desired signal, and so cannot be completely removed with filtering.

Time-Varying Linear Nature

image-20231029101042671

linear with respect to \(v_{in}\) and time-varying

Given \(v_{in}(t)=m(t)\cos (\omega_c t)\) and LO signal of \(\cos(\omega_{LO} t)\), then \[ v_{out}(t) = \text{LPF}\{m(t)\cos(\omega_c t)\cdot \cos(\omega_{LO} t)\} \] and \[ v_{out}(t) = m(t)\cos((\omega_c - \omega_{LO})t) \]

A linear periodically-varying transfer function implements frequency translation
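A small numeric illustration of this frequency translation (a sketch; the carrier, LO and message frequencies are arbitrary, a zero-phase FIR low-pass stands in for the ideal LPF, and the factor of \(\tfrac{1}{2}\) from the product-to-sum identity is kept explicit):

```python
import numpy as np
from scipy import signal

fs = 1e6
t  = np.arange(0, 0.02, 1/fs)
fc, flo, fm = 100e3, 90e3, 1e3                 # carrier, LO and message frequencies

m     = np.cos(2*np.pi*fm*t)                   # message m(t)
vin   = m*np.cos(2*np.pi*fc*t)
mixed = vin*np.cos(2*np.pi*flo*t)              # multiplier output

lpf  = signal.firwin(401, 30e3, fs=fs)         # passes fc - flo = 10 kHz, rejects fc + flo
vout = signal.filtfilt(lpf, 1.0, mixed)

expected = 0.5*m*np.cos(2*np.pi*(fc - flo)*t)  # (1/2) m(t) cos((wc - wlo) t)
print(np.max(np.abs(vout - expected)))         # small compared with the 0.5 amplitude
```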

Periodic small signal analyses

Analysis in Simulator

  1. LPV analyses start by performing a periodic analysis to compute the periodic operating point with only the large clock signal applied (the LO, the clock, the carrier, etc.).
  2. The circuit is then linearized about this time-varying operating point (expand about the periodic equilibrium point with a Taylor series and discard all but the first-order term)
  3. and the small information signal is applied. The response is calculated using linear time-varying analysis

Versions of this type of small-signal analysis exist for both harmonic balance and shooting methods

PAC is useful for predicting the output sidebands produced by a particular input signal

PXF is best at predicting the input images for a particular output

image-20241109110936909

image-20241109110958641

Conversion Matrix Analysis

Large-signal/small-signal analysis, or conversion matrix analysis, is useful for a large class of problems wherein a nonlinear device is driven, or "pumped" by a single large sinusoidal signal; another signal, much smaller, is applied; and we seek only the linear response to the small signal.

The most common application of this technique is in the design of mixers and in nonlinear noise analysis

  1. First, the nonlinear device is analyzed under large-signal excitation only, for which the harmonic-balance method can be applied
  2. The nonlinear elements in the device's equivalent circuit are then linearized to create small-signal, linear, time-varying elements
  3. Finally, a small-signal analysis is performed

Element Linearized

The figure below shows a nonlinear resistive element with the \(I/V\) relationship \(I=f(V)\), driven by a large-signal voltage

image-20220511203515431

Assuming that \(V\) consists of the sum of a large-signal component \(V_0\) and a small-signal component \(v\), the Taylor series is \[ f(V_0+v) = f(V_0)+\frac{d}{dV}f(V)|_{V=V_0}\cdot v+\frac{1}{2}\frac{d^2}{dV^2}f(V)|_{V=V_0}\cdot v^2+... \] The small-signal, incremental current is found by subtracting the large-signal component of the current \[ i(v)=f(V_0+v)-f(V_0) \] If \(v \ll V_0\), the terms in \(v^2\), \(v^3\),... are negligible. Then, \[ i(v) = \frac{d}{dV}f(V)|_{V=V_0}\cdot v \]

\(V_0\) need not be a DC quantity; it can be a time-varying large-signal voltage \(V_L(t)\), and \(v=v(t)\) can likewise be a function of time. Then \[ i(t)=g(t)v(t) \] where \(g(t)=\frac{d}{dV}f(V)|_{V=V_L(t)}\)

The time-varying conductance \(g(t)\) is the derivative of the element's \(I/V\) characteristic evaluated at the large-signal voltage

By an analogous derivation, one could have a current-controlled resistor with the \(V/I\) characteristic \(V = f_R(I)\) and obtain the small-signal \(v/i\) relation \[ v(t) = r(t)i(t) \] where \(r(t) = \frac{d}{dI}f_R(I)|_{I=I_L(t)}\)
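A short numerical sketch of this linearization for an assumed pumped diode (all values illustrative): the conductance waveform is simply \(df/dV\) evaluated along the pump voltage \(V_L(t)\), and its Fourier coefficients \(G_n\) are the pump harmonics used below in the conversion matrix:

```python
import numpy as np

Is, Vth = 1e-15, 25.85e-3                 # assumed diode-like element
f_p, Np = 1e9, 256                        # pump frequency, samples per pump period
t  = np.arange(Np)/(Np*f_p)
VL = 0.7 + 0.05*np.cos(2*np.pi*f_p*t)     # large-signal pump voltage V_L(t)

g = (Is/Vth)*np.exp(VL/Vth)               # g(t) = d f(V)/dV evaluated at V = V_L(t)

G = np.fft.fft(g)/Np                      # Fourier coefficients G_n (pump harmonics)
print("|G_0|, |G_1|, |G_2| =", abs(G[0]), abs(G[1]), abs(G[2]))
```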

A nonlinear element excited by two tones supports currents and voltages at the mixing frequencies \(m\omega_1+n\omega_2\), where \(m\) and \(n\) are integers. If one of those tones, \(\omega_1\), has such a low level that it does not generate harmonics, and the other, \(\omega_2=\omega_p\), is a large-signal sinusoid, then the mixing frequencies reduce to \(\omega=\pm\omega_1+n\omega_p\), as shown in the figure below

image-20231108223600922

A more compact representation of the mixing frequencies is \[ \omega_n=\omega_0+n\omega_p \] which includes only half of the mixing frequencies:

  • the negative components of the lower sidebands (LSB)
  • and the positive components of the upper sidebands (USB)

image-20220511211336437

For a real signal, the positive- and negative-frequency components are complex-conjugate pairs, so this half of the mixing products carries all the information

Conversion Matrix as Bridge

The frequency-domain currents and voltages in a time-varying circuit element are related by a conversion matrix

The small-signal voltage and current can be expressed in the frequency notation as \[ v'(t) = \sum_{n=-\infty}^{\infty}V_ne^{j\omega_nt} \] and \[ i'(t) = \sum_{n=-\infty}^{\infty}I_ne^{j\omega_nt} \] where \(v'(t)\) and \(i'(t)\) are not Fourier series, because the frequencies \(\omega_n=\omega_0+n\omega_p\) are not harmonically related

The conductance waveform \(g(t)\) can be expressed by its Fourier series \[ g(t)=\sum_{n=-\infty}^{\infty}G_ne^{jn\omega_pt} \] whose components are harmonics of the large-signal frequency \(\omega_p\)

The voltage and current are then related by Ohm's law \[ i'(t)=g(t)v'(t) \] so that \[ \sum_{k=-\infty}^{\infty}I_ke^{j\omega_kt}=\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}G_nV_me^{j\omega_{m+n}t} \] Equating the terms at each mixing frequency \(\omega_k\) gives \(I_k=\sum_{m}G_{k-m}V_m\), which in matrix form (after truncating the sidebands) is the conversion matrix
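A minimal sketch (reusing the assumed pumped-diode conductance from the earlier snippet and truncating the sidebands to \(|n|\le 3\)) of how the coefficients \(G_{k-m}\) form the conversion matrix that maps small-signal voltage sidebands to current sidebands:

```python
import numpy as np

# pumped-diode conductance g(t) and its pump harmonics G_n (same assumptions as above)
Is, Vth, f_p, Np = 1e-15, 25.85e-3, 1e9, 256
t  = np.arange(Np)/(Np*f_p)
VL = 0.7 + 0.05*np.cos(2*np.pi*f_p*t)
g  = (Is/Vth)*np.exp(VL/Vth)                    # g(t) = df/dV along V_L(t)
G  = np.fft.fft(g)/Np                           # G[n] multiplies e^{+j n wp t}

N_sb = 3                                        # sidebands kept: n = -3..+3
idx  = np.arange(-N_sb, N_sb + 1)
Gc   = lambda n: G[n] if n >= 0 else np.conj(G[-n])       # g(t) real => G_{-n} = G_n*
C    = np.array([[Gc(k - m) for m in idx] for k in idx])  # conversion matrix

V = np.zeros(2*N_sb + 1, complex)
V[N_sb + 1] = 1e-3                              # 1 mV small signal applied at w0 + wp
I = C @ V                                       # currents at every w0 + n*wp
for n, In in zip(idx, I):
    print(f"n = {int(n):+d}: |I_n| = {abs(In):.3e} A")
```

Each column of the matrix describes how a voltage applied at one sideband produces currents at all the other mixing frequencies.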

A linear periodically-varying transfer function implements frequency translation

Linear Time Varying

The response of a relaxed LTV system at a time \(t\) due to an impulse applied at a time \(t − \tau\) is denoted by \(h(t, \tau)\)

The first argument in the impulse response denotes the time of observation.

The second argument indicates that the system was excited by an impulse launched at a time \(\tau\) prior to the time of observation.

Thus, the response of an LTV system not only depends on how long before the observation time it was excited by the impulse but also on the observation instant.

The output \(y(t)\) of an initially relaxed LTV system with impulse response \(h(t, \tau)\) is given by the convolution integral \[ y(t) = \int_0^{\infty}h(t,\tau)x(t-\tau)d\tau \]

Assuming \(x(t) = e^{j2\pi f t}\), \[ y(t) = \int_0^{\infty}h(t,\tau)e^{j2\pi f (t-\tau)}d\tau = e^{j2\pi f t}\int_0^{\infty}h(t,\tau)e^{-j2\pi f\tau}d\tau \] The (time-varying) frequency response can therefore be interpreted as \[ H(j2\pi f, t) = \int_0^{\infty}h(t,\tau)e^{-j2\pi f\tau}d\tau \]

A Linear Periodically Time-Varying (LPTV) system is the special case of an LTV system whose impulse response satisfies \[ h(t, \tau) = h(t+T_s, \tau) \] In other words, the response to an impulse remains unchanged if the time at which the output is observed (\(t\)) and the time at which the impulse is applied are both shifted by \(T_s\): \[ H(j2\pi f, t+T_s) = \int_0^{\infty}h(t+T_s,\tau)e^{-j2\pi f\tau}d\tau = \int_0^{\infty}h(t,\tau)e^{-j2\pi f\tau}d\tau = H(j2\pi f, t) \]

Since \(H(j2\pi f, t)\) of an LPTV system is periodic in \(t\) with time period \(T_s\), it can be expanded as a Fourier series in \(t\), resulting in \[ H(j2\pi f, t) = \sum_{k=-\infty}^{\infty} H_k(j2\pi f)e^{j2\pi k f_s t} \] where \(f_s=1/T_s\). The coefficients of the Fourier series \(H_k(j2\pi f)\) are given by \[ H_k(j2\pi f) = \frac{1}{T_s}\int_0^{T_s} H(j2\pi f, t) e^{-j2\pi k f_s t}dt \]
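To tie these definitions together, here is a small numerical experiment on an assumed toy LPTV system: an input multiplied by \(\cos(2\pi f_{LO}t)\) and then passed through a first-order RC section (all values illustrative). Probing with \(x(t)=e^{j2\pi ft}\), demodulating the output by \(e^{-j2\pi ft}\) yields \(H(j2\pi f,t)\), and averaging over one LO period extracts \(H_k(j2\pi f)\); for this cascade one expects \(H_{\pm1}(j2\pi f)=\tfrac{1}{2}G\big(j2\pi(f\pm f_{LO})\big)\), with \(G\) the RC transfer function.

```python
import numpy as np

fs, f_lo, f = 1e9, 1e6, 0.25e6          # sample rate, LO frequency, probe frequency
RC = 1/(2*np.pi*2e6)                    # first-order section with a 2 MHz pole (assumed)
dt, T_lo = 1/fs, 1/f_lo
N_settle, N_per = int(20*RC/dt), int(round(T_lo/dt))
N = N_settle + N_per

t = np.arange(N)*dt
x = np.exp(1j*2*np.pi*f*t)              # complex-exponential probe
u = np.cos(2*np.pi*f_lo*t)*x            # periodically time-varying part (the LO)

y = np.zeros(N, complex)                # forward-Euler model of the RC filter
for n in range(N-1):
    y[n+1] = y[n] + (dt/RC)*(u[n] - y[n])

# H(j2*pi*f, t) over one LO period, after the transient has settled
H_t = y[N_settle:]*np.exp(-1j*2*np.pi*f*t[N_settle:])
k = 1
H_k = np.mean(H_t*np.exp(-1j*2*np.pi*k*f_lo*t[N_settle:]))

G = lambda freq: 1/(1 + 1j*2*np.pi*freq*RC)       # RC transfer function
print("numerical  H_{+1}(j2*pi*f):", H_k)
print("predicted 0.5*G(f + f_lo) :", 0.5*G(f + f_lo))
```

The two printed values should agree to within the discretization error of the forward-Euler filter model.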

reference

K. S. Kundert, "Introduction to RF simulation and its application," in IEEE Journal of Solid-State Circuits, vol. 34, no. 9, pp. 1298-1319, Sept. 1999, doi: 10.1109/4.782091. [pdf]

Stephen Maas, Nonlinear Microwave and RF Circuits, Second Edition , Artech, 2003.

Karti Mayaram. ECE 521 Fall 2016 Analog Circuit Simulation: Simulation of Radio Frequency Integrated Circuits [pdf1, pdf2]

The Value Of RF Harmonic Balance Analyses For Analog Verification: Frequency domain periodic large and small signal analyses. [https://semiengineering.com/the-value-of-rf-harmonic-balance-analyses-for-analog-verification/]

Shanthi Pavan, "Demystifying Linear Time Varying Circuits"

S. Pavan and G. C. Temes, "Reciprocity and Inter-Reciprocity: A Tutorial— Part I: Linear Time-Invariant Networks," in IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 70, no. 9, pp. 3413-3421, Sept. 2023, doi: 10.1109/TCSI.2023.3276700.

S. Pavan and G. C. Temes, "Reciprocity and Inter-Reciprocity: A Tutorial—Part II: Linear Periodically Time-Varying Networks," in IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 70, no. 9, pp. 3422-3435, Sept. 2023, doi: 10.1109/TCSI.2023.3294298.

S. Pavan and R. S. Rajan, "Interreciprocity in Linear Periodically Time-Varying Networks With Sampled Outputs," in IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 61, no. 9, pp. 686-690, Sept. 2014, doi: 10.1109/TCSII.2014.2335393.

Prof. Shanthi Pavan. Introduction to Time - Varying Electrical Network. [https://youtube.com/playlist?list=PLyqSpQzTE6M8qllAtp9TTODxNfaoxRLp9]

Shanthi Pavan. EE5323: Advanced Electrical Networks (Jan-May. 2015) [https://www.ee.iitm.ac.in/vlsi/courses/ee5323/start]

R. S. Ashwin Kumar. EE698W: Analog circuits for signal processing [https://home.iitk.ac.in/~ashwinrs/2022_EE698W.html]

Piet Vanassche, Georges Gielen, and Willy Sansen. 2009. Systematic Modeling and Analysis of Telecom Frontends and their Building Blocks (1st. ed.). Springer Publishing Company, Incorporated.

Beffa, Federico. (2023). Weakly Nonlinear Systems. 10.1007/978-3-031-40681-2.

Wereley, Norman. (1990). Analysis and control of linear periodically time varying systems.

Hameed, S. (2017). Design and Analysis of Programmable Receiver Front-Ends Based on LPTV Circuits. UCLA. ProQuest ID: Hameed_ucla_0031D_15577. Merritt ID: ark:/13030/m5gb6zcz. Retrieved from https://escholarship.org/uc/item/51q2m7bx

Matt Allen. Introduction to Linear Time Periodic Systems. [https://youtu.be/OCOkEFDQKTI]

Fivel, Oren. "Analysis of Linear Time-Varying & Periodic Systems." arXiv preprint arXiv:2202.00498 (2022).

RF Harmonic Balance Analysis for Nonlinear Circuits [https://resources.pcb.cadence.com/blog/2019-rf-harmonic-balance-analysis-for-nonlinear-circuits]

Steer, Michael. Microwave and RF Design (Third Edition, 2019). NC State University, 2019.

Steer, Michael. Harmonic Balance Analysis of Nonlinear RF Circuits - Case Study Index: CS_AmpHB [link]

Vishal Saxena, "SpectreRF Periodic Analysis" URL:https://www.eecis.udel.edu/~vsaxena/courses/ece614/Handouts/SpectreRF%20Periodic%20Analysis.pdf

Josh Carnes,Peter Kurahashi. "Periodic Analyses of Sampled Systems" URL:https://slideplayer.com/slide/14865977/

Kundert, Ken. (2006). Simulating Switched-Capacitor Filters with SpectreRF. URL:https://designers-guide.org/analysis/sc-filters.pdf

Article (20482538) Title: Why is pnoise sampled(jitter) different than pnoise timeaverage on a driven circuit? URL:https://support.cadence.com/apex/ArticleAttachmentPortal?id=a1O0V000009EStcUAG

Article (20467468) Title: The mathematics behind choosing the upper frequency when simulating pnoise jitter on an oscillator URL:https://support.cadence.com/apex/ArticleAttachmentPortal?id=a1O0V000007MnBUUA0

Andrew Beckett. "Simulating phase noise for PFD-CP". URL:https://groups.google.com/g/comp.cad.cadence/c/NPisXTElx6E/m/XjWxKbbfh2cJ

Dr. Yanghong Huang. MATH44041/64041: Applied Dynamical Systems [https://personalpages.manchester.ac.uk/staff/yanghong.huang/teaching/MATH4041/default.htm]

Jeffrey Wong. Math 563, Spring 2020 Applied computational analysis [https://services.math.duke.edu/~jtwong/math563-2020/main.html]

Jeffrey Wong. Math 353, Fall 2020 Ordinary and Partial Differential Equations [https://services.math.duke.edu/~jtwong/math353-2020/main.html]

Tip of the Week: Please explain in more practical (less theoretical) terms the concept of "oscillator line width." [https://community.cadence.com/cadence_blogs_8/b/rf/posts/please-explain-in-more-practical-less-theoretical-terms-the-concept-of-quot-oscillator-line-width-quot]

Rubiola, E. (2008). Phase Noise and Frequency Stability in Oscillators (The Cambridge RF and Microwave Engineering Series). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511812798

Dr. Ulrich L. Rohde, Noise Analysis, Then and Today [https://www.microwavejournal.com/articles/29151-noise-analysis-then-and-today]

Nicola Da Dalt and Ali Sheikholeslami. 2018. Understanding Jitter and Phase Noise: A Circuits and Systems Perspective (1st. ed.). Cambridge University Press, USA.

Kester, Walt. (2005). Converting Oscillator Phase Noise to Time Jitter. [https://www.analog.com/media/en/training-seminars/tutorials/MT-008.pdf]

Drakhlis, B.. (2001). Calculate oscillator jitter by using phase-noise analysis: Part 2 of two parts. Microwaves and Rf. 40. 109-119.

Explanation for sampled PXF analysis. [https://community.cadence.com/cadence_technology_forums/f/custom-ic-design/45055/explanation-for-sampled-pxf-analysis/1376140#1376140]

Low-Noise Design Techniques for Analog Amplifiers, Part 2-3: Practice [https://youtu.be/vXLDfEWR31k?si=p2gbwaKoTxtfvrTc]

2.2 How POP Really Works [https://www.simplistechnologies.com/documentation/simplis/ast_02/topics/2_2_how_pop_really_works.htm]

Jaeha Kim. Lecture 11. Mismatch Simulation using PNOISE [https://ocw.snu.ac.kr/sites/default/files/NOTE/7037.pdf]

Hueber, G., & Staszewski, R. B. (Eds.) (2010). Multi-Mode/Multi-Band RF Transceivers for Wireless Communications: Advanced Techniques, Architectures, and Trends. John Wiley & Sons. https://doi.org/10.1002/9780470634455

B. Boser,A. Niknejad ,S.Gambin 2011 EECS 240 Topic 6: Noise Analysis [https://mixsignal.files.wordpress.com/2013/06/t06noiseanalysis_simone-1.pdf]
