dBc

TODO 📅

dBFS

TODO 📅

Nyquist rate & Nyquist frequency

  • Nyquist rate

    The Nyquist rate is the minimum sample rate required to represent a signal without aliasing. It is equal to twice the highest frequency of the signal

  • Nyquist frequency

    The Nyquist frequency is the highest frequency that can be represented without aliasing in a discrete signal. It's equal to half the sampling frequency

https://upload.wikimedia.org/wikipedia/commons/d/d8/Nyquist_frequency_%26_rate.svg

Oversampling Ratio (OSR) is defined as the ratio of the Nyquist frequency \(f_s/2\) to the signal bandwidth \(B\), given by \(\text{OSR}=\frac{f_s}{2B}\)
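
As a quick numeric check (the values here are assumed for illustration): a converter sampling at \(f_s = 1\,\text{GHz}\) with a signal bandwidth of \(B = 10\,\text{MHz}\) has \[ \text{OSR} = \frac{f_s}{2B} = \frac{10^9}{2\times 10^7} = 50 \]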

Summation & Integration

  • Integration: impulse response \(u(t)\), transform \(\frac{1}{s}\), ROC \(\mathfrak{Re}\{s\}\gt 0\)
  • Summation: impulse response \(u[n]\), transform \(\frac{1}{1-z^{-1}}\), ROC \(|z| \gt 1\)

both are NOT stable

sinc filter

image-20241002143413907

where \(W\) is sampling frequency in Hz

sinc.drawio

image-20241002143219224

Zero-order hold (ZOH)

image-20240928101832121 \[ h_{ZOH}(t) = \text{rect}(\frac{t}{T} - \frac{1}{2}) = \left\{ \begin{array}{cl} 1 & : \ 0 \leq t \lt T \\ 0 & : \ \text{otherwise} \end{array} \right. \] The effective frequency response is the continuous Fourier transform of the impulse response \[ H_{ZOH}(f) = \mathcal{F}\{h_{ZOH}(t)\} = T\frac{1-e^{-j2\pi fT}}{j2\pi fT}=Te^{-j\pi fT}\text{sinc}(fT) \] where \(\text{sinc}(x)\) is the normalized sinc function \(\frac{\sin(\pi x)}{\pi x}\)

The Laplace-transform transfer function of the ZOH follows by substituting \(j2\pi f \to s\), i.e. by taking the Laplace transform of the impulse response \[ H_{ZOH}(s) = \mathcal{L}\{h_{ZOH}(t)\}=\frac{1-e^{-sT}}{s} \]
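
A minimal numerical sketch (sample rate assumed) of the ZOH magnitude response \(|H_{ZOH}(f)| = T\,|\text{sinc}(fT)|\), showing the well-known droop of about \(-3.9\) dB at the Nyquist frequency:

fs = 1e9;                              % assumed sample rate, 1 GS/s
T  = 1/fs;
f  = linspace(fs/2000, fs/2, 1000);    % avoid f = 0 to keep sin(x)/x well defined
Hmag = abs(sin(pi*f*T)./(pi*f*T));     % |H_ZOH(f)|/T, the normalized sinc
droop_dB = 20*log10(Hmag(end));        % droop at f = fs/2, about -3.92 dB
disp(droop_dB);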

image-20240928103227690

frequency convention

  • radian frequency \(\omega_0\) in rad/s
  • cyclic frequency \(f_0\) in Hz

Energy signals vs Power signal

Topic 5 Energy & Power Signals, Correlation & Spectral Density [https://www.robots.ox.ac.uk/~dwm/Courses/2TF_2021/N5.pdf]


image-20240427155046131

image-20240719203550628



image-20240427155100927

image-20240719204148098

modulation & demodulation

image-20240826221237312

image-20240826221251379

Hossein Hashemi, RF Circuits, [https://youtu.be/0f3yZMvD2Jg?si=2c1Q4y6WJq8Jj8oN]

Coherent Sampling

To avoid spectral leakage completely, the method of coherent sampling is recommended. Coherent sampling requires that the input- and clock-frequency generators are phase locked, and that the input frequency be chosen based on the following relationship: \[ \frac{f_{\text{in}}}{f_{\text{s}}}=\frac{M_C}{N_R} \]

where:

  • \(f_{\text{in}}\) = the desired input frequency
  • \(f_s\) = the clock frequency of the data converter under test
  • \(M_C\) = the number of cycles in the data window (to make all samples unique, choose odd or prime numbers)
  • \(N_R\) = the data record length (for an 8192-point FFT, the data record is 8192 samples long)

\[\begin{align} f_{\text{in}} &=\frac{f_s}{N_R}\cdot M_C \\ &= f_{\text{res}}\cdot M_C \end{align}\]
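
A minimal MATLAB sketch of picking a coherent input frequency from the relation above (\(f_s\), \(N_R\) and \(M_C\) below are assumed example values):

fs   = 100e6;          % assumed ADC clock frequency
NR   = 8192;           % data record length (FFT size)
MC   = 331;            % odd/prime number of input cycles in the record
fres = fs/NR;          % FFT bin spacing
fin  = MC*fres;        % coherent input frequency, about 4.04 MHz here
disp([fres, fin]);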


irreducible ratio

An irreducible ratio ensures that identical code sequences are not repeated multiple times. Unnecessary repetition of the same codes is undesirable, as it increases ADC test time.

Given that \(N_R\) is a power of 2, an odd number for \(M_C\) will always produce an irreducible ratio \(\frac{M_C}{N_R}\)

Assuming there is a common factor \(k\) between \(M_C\) and \(N_R\), i.e. \(\frac{M_C}{N_R}=\frac{k M_C'}{k N_R'}\)

The samples (\(n\in[1, N_R]\))

\[\begin{align} y[n] &= \sin\left( \omega_{\text{in}} \cdot t_n \right) \\ &= \sin\left( \omega_{\text{in}} \cdot n\frac{1}{f_s} \right) \\ & = \sin\left( \omega_{\text{in}} \cdot n\frac{1}{f_{\text{in}}}\frac{M_C}{N_R} \right) \\ & = \sin\left( 2\pi n\frac{M_C}{N_R} \right) \end{align}\]

Then

\[\begin{align} y[n+N_R'] &= \sin\left( 2\pi (n+N_R')\frac{M_C}{N_R} \right) \\ & = \sin\left( 2\pi n \frac{M_C}{N_R} + 2\pi N_R'\frac{M_C}{N_R}\right) \\ & = \sin\left( 2\pi n \frac{M_C}{N_R} + 2\pi N_R'\frac{kM_C'}{kN_R'} \right) \\ & = \sin\left( 2\pi n \frac{M_C}{N_R} + 2\pi M_C' \right) \\ & = \sin\left( 2\pi n \frac{M_C}{N_R}\right) \end{align}\]

So the samples are repeated: \(y[n] = y[n+N_R']\). No additional information is gained by repeating the same sampling points.
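
The repetition is easy to see numerically; a minimal sketch with toy numbers (assumed) where \(\gcd(M_C, N_R)=4\):

NR = 16;  MC = 4;                  % reducible ratio: MC/NR = 1/4
n  = (1:NR).';
y  = sin(2*pi*n*MC/NR);
disp([y(1:4), y(5:8), y(9:12)]);   % identical columns: y[n] = y[n + NR/gcd(MC,NR)]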


Example \[ N \cdot \frac{1}{F_s} = M \cdot \frac{1}{F_{in}} \]

where \(F_s\) is sample frequency, \(F_{in}\) input signal frequency.

\(N\) is often 256 or 512; \(M\) is 3, 5, 7, or 11.

channel loss

  • skin effect loss
  • dielectric loss

image-20240810102618245

phase delay & group delay

image-20240810094519487

  • Phase delay directly measures the device or system time delay of individual sinusoidal frequency components in the steady-state conditions.
  • In the ideal case the envelope delay is equal to the phase delay
  • envelope delay is a more sensitive measure of aberrations than phase delay

phase delay

image-20240808212730768

If the phase delay peaks (exceeds the low-frequency value) you can expect to see high-frequency components late in the step response. This causes ringing.

group delay

image-20240808213806803

image-20240808220657443

image-20240808220740349


The steady-state response at this frequency is a polarity flip, i.e. a 180-degree phase shift, which corresponds to a transfer function \(H(s)=-1\): \[ H(s) = e^{j\pi} \] That is, \(\phi(\omega) = \pi\), so \[ \tau_p = \frac{\pi}{\omega} \] and \[ \tau_g = \frac{\partial \pi}{\partial \omega}=0 \]
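
A minimal numeric illustration of the two delay definitions, using the common sign convention \(\tau_p = -\phi(\omega)/\omega\) and \(\tau_g = -d\phi/d\omega\), and an assumed first-order RC low-pass:

RC  = 1e-6;                         % assumed time constant, 1 us
w   = logspace(3, 8, 1000);         % rad/s
phi = -atan(w*RC);                  % phase of H(jw) = 1/(1 + jw*RC)
tau_p = -phi ./ w;                  % phase delay
tau_g = -gradient(phi, w);          % group delay, numerical -dphi/dw
disp([tau_p(1), tau_g(1), RC]);     % both approach RC at low frequency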


Hollister, Allen L. Wideband Amplifier Design. Raleigh, NC: SciTech Pub., 2007.

Pupalaikis, Peter. (2006). Group Delay and its Impact on Serial Data Transmission and Testing. [https://cdn.teledynelecroy.com/files/whitepapers/group_delay-designcon2006.pdf]

Pupalaikis et al., "Eye Patterns in Scopes", DesignCon, Santa Clara CA, 2005. [https://cdn.teledynelecroy.com/files/whitepapers/eye_patterns_in_scopes-designcon_2005.pdf]

Starič, P. & Margan, E. (2006). Wideband Amplifiers. doi:10.1007/978-0-387-28341-8.

Alan V. Oppenheim, Alan S. Willsky, and S. Hamid Nawab. 1996. Signals & systems (2nd ed.). Prentice-Hall, Inc., USA.

Phase delay vs group delay: Common misconceptions. [https://audiosciencereview.com/forum/index.php?threads/phase-delay-vs-group-delay-common-misconceptions.39591/]

Feedback Rearrange

loop-refactor.drawio

The closed-loop transfer functions \(Y/X\) and \(Y_1/X_1\) are almost the same, except for the sign

\[\begin{align} \frac{Y}{X} &= +\frac{H_1(s)H_2(s)}{1+H_1(s)H_2(s)} \\ \frac{Y_1}{X_1} &= -\frac{H_1(s)H_2(s)}{1+H_1(s)H_2(s)} \end{align}\]

loop-refactor-partion.drawio

define \(-Y_1=Y_n\), then \[ \frac{Y_n}{X_1} = \frac{H_1(s)H_2(s)}{1+H_1(s)H_2(s)} \] loop-refactor-partion-general.drawio

image-20240805231921946

Saurabh Saxena, IIT Madras. CICC2022 Clocking for Serial Links - Frequency and Jitter Requirements, Phase-Locked Loops, Clock and Data Recovery

Convolution of probability distributions

The probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions.
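
A minimal sketch: the distribution of the sum of two independent fair dice, obtained by convolving their PMFs:

p1 = ones(1,6)/6;  p2 = ones(1,6)/6;   % PMFs of two independent fair dice
psum = conv(p1, p2);                   % PMF of the sum, supported on 2..12
disp(psum);                            % triangular: 1/36, 2/36, ..., 6/36, ..., 1/36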

image-20240804104528903

Thermal noise

Thermal noise in an ideal resistor is approximately white, meaning that its power spectral density is nearly constant throughout the frequency spectrum.

When limited to a finite bandwidth and viewed in the time domain, thermal noise has a nearly Gaussian amplitude distribution
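
A quick numeric sketch (temperature and resistance assumed) of the familiar \(4kTR\) density behind this white spectrum:

k = 1.380649e-23;  T = 300;  R = 1e3;  % Boltzmann constant, assumed 300 K and 1 kOhm
vn2 = 4*k*T*R;                         % one-sided voltage noise PSD, V^2/Hz
disp(sqrt(vn2));                       % about 4.07 nV/sqrt(Hz)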

image-20240804102454281

Barkhausen criteria

Barkhausen criteria are necessary but not sufficient conditions for sustainable oscillations

image-20240720090654883

it simply "latches up" rather than oscillates

NRZ Bandwidth

image-20240607221359970

Maxim Integrated,NRZ Bandwidth - HF Cutoff vs. SNR [https://pdfserv.maximintegrated.com/en/an/AN870.pdf]

\(0.35/T_r\)

image-20240607222440796

\(0.5/T_r\)

TODO 📅

System Type

Control of Steady-State Error to Polynomial Inputs: System Type

image-20240502232125317

Control systems are assigned a type number according to the maximum degree of the input polynomial for which the steady-state error is a finite constant, i.e.

  • Type 0: Finite error to a step (position error)
  • Type 1: Finite error to a ramp (velocity error)
  • Type 2: Finite error to a parabola (acceleration error)

The open-loop transfer function can be expressed as \[ T(s) = \frac{K_n(s)}{s^n} \]

where we collect all the terms except the poles (\(s\)) at the origin into \(K_n(s)\).

The polynomial inputs are \(r(t)=\frac{t^k}{k!} u(t)\), whose transform is \[ R(s) = \frac{1}{s^{k+1}} \]

Then the equation for the error is simply \[ E(s) = \frac{1}{1+T(s)}R(s) \]

Application of the Final Value Theorem to the error formula gives the result

\[\begin{align} \lim _{t\to \infty} e(t) &= e_{ss} = \lim _{s\to 0} sE(s) \\ &= \lim _{s\to 0} s\frac{1}{1+\frac{K_n(s)}{s^n}}\frac{1}{s^{k+1}} \\ &= \lim _{s\to 0} \frac{s^n}{s^n + K_n}\frac{1}{s^k} \end{align}\]

  • if \(n > k\), \(e=0\)
  • if \(n < k\), \(e\to \infty\)
  • if \(n=k\)
    • \(e_{ss} = \frac{1}{1+K_n}\) if \(n=k=0\)
    • \(e_{ss} = \frac{1}{K_n}\) if \(n=k \neq 0\)

where we define \(K_n(0) = K_n\)
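
A short worked example (the loop and input here are assumed for illustration): for a Type 1 loop \(T(s)=\frac{K_v}{s}\) (so \(n=1\), \(K_1=K_v\)) driven by a unit ramp \(r(t)=t\,u(t)\) (\(k=1\)), the error formula above gives \[ e_{ss} = \lim_{s\to 0} s\cdot\frac{1}{1+K_v/s}\cdot\frac{1}{s^2} = \lim_{s\to 0}\frac{1}{s+K_v} = \frac{1}{K_v} \] which matches the \(n=k\neq 0\) case.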

Nyquist's Stability Criterion

TODO 📅

[Michael H. Perrott, High Speed Communication Circuits and Systems, Lecture 15 Integer-N Frequency Synthesizers]

Spectral content of NRZ

image-20231111100420675

image-20231111101322771

image-20231110224237933

Lecture 26 Autocorrelation Functions of Random Binary Processes [https://bpb-us-w2.wpmucdn.com/sites.gatech.edu/dist/a/578/files/2003/12/ECE3075A-26.pdf]

Lecture 32 Correlation Functions & Power Density Spectrum, Cross-spectral Density [https://bpb-us-w2.wpmucdn.com/sites.gatech.edu/dist/a/578/files/2003/12/ECE3075A-32.pdf]

sinusoidal steady-state and frequency response

image-20231104104933781

image-20231104104946203

image-20231104105056345

image-20231104105139814

image-20231104105223549

Due to KCL and \(u(t)=e^{j\omega t}\) and \(y(t)=H(j\omega)e^{j\omega t}\), we have the ODE:

\[\begin{align} \frac{u(t) - y(t)}{R} = C \frac{dy(t)}{dt} \\ e^{j\omega t} - H(j\omega) e^{j\omega t} = H(j\omega)\cdot j\omega RC e^{j\omega t} \\ \end{align}\]

\(H(j\omega)\) is obtained as below \[ H(j\omega) = \frac{1}{1+j\omega RC} \]

image-20231104135855739

Different Variants of the PSD Definition

In the practice of engineering, it has become customary to use slightly different variants of the PSD definition, depending on the particular application or research field.

  • Two-Sided PSD, \(S_x(f)\)

    this is a synonym of the PSD defined as the Fourier Transform of the autocorrelation.

  • One-Sided PSD, \(S'_x(f)\)

    this is a variant derived from the two-sided PSD by considering only the positive frequency semi-axis.

    To conserve the total power, the value of the one-sided PSD is twice that of the two-sided PSD \[ S'_x(f) = \left\{ \begin{array}{cl} 0 & : \ f \lt 0 \\ S_x(f) & : \ f = 0 \\ 2S_x(f) & : \ f \gt 0 \end{array} \right. \]

image-20230603185546658

Note that the one-sided PSD definition makes sense only if the two-sided is an even function of \(f\)

If \(S'_x(f)\) is even symmetrical around a positive frequency \(f_0\), then two additional definitions can be adopted:

  • Single-Sideband PSD, \(S_{SSB,x}(f)\)

    This is obtained from \(S'_x(f)\) by moving the origin of the frequency axis to \(f_0\) \[ S_{SSB,x}(f) =S'_x(f+f_0) \] This concept is particularly useful for describing phase or amplitude modulation schemes in wireless communications, where \(f_0\) is the carrier frequency.

    Note that there is no difference in the values of the one-sided versus the SSB PSD; it is just a pure translation on the frequency axis.

  • Double-Sideband PSD, \(S_{DSB,x}(f)\)

    this is a variant of the SSB PSD obtained by considering only the positive frequency semi-axis.

    As in the case of the one-sided PSD, to conserve total power, the value of the DSB PSD is twice that of the SSB \[ S_{DSB,x}(f) = \left\{ \begin{array}{cl} 0 & : \ f \lt 0 \\ S_{SSB,x}(f) & : \ f = 0 \\ 2S_{SSB,x}(f) & : \ f \gt 0 \end{array} \right. \]

image-20230603222054506

Note that the DSB definition makes sense only if the SSB PSD is even symmetrical around zero

Poles and Zeros of transfer function

poles

\[ H(s) = \frac{1}{1+s/\omega_0} \]

magnitude and phase at \(\omega_0\) and \(-\omega_0\) \[\begin{align} H(j\omega_0) &= \frac{1}{1+j} = \frac{1}{\sqrt{2}}e^{-j\pi/4} \\ H(-j\omega_0) &= \frac{1}{1-j} = \frac{1}{\sqrt{2}}e^{j\pi/4} \end{align}\]

system response \(y(t)\) of input \(\cos(\omega_0 t)\), note \(\cos(\omega_0t) = \frac{1}{2}(e^{j\omega_0 t} + e^{-j\omega_0 t})\) \[\begin{align} y(t) &= H(j\omega_0)\cdot \frac{1}{2}e^{j\omega_0 t} + H(-j\omega_0)\cdot \frac{1}{2}e^{-j\omega_0 t} \\ &= \frac{1}{\sqrt{2}}\cos(\omega_0t-\pi/4) \end{align}\]

\(\cos(\omega_0 t)\), whose frequency equals the pole magnitude, does NOT produce an infinite response

That is, a pole indicates a decreasing trend in the magnitude response

zeros

Similar to poles: \(\cos(\omega_0 t)\), whose frequency equals the zero magnitude, does NOT produce zero response

\[ H(s) = 1+s/\omega_0 \]

magnitude and phase at \(\omega_0\) and \(-\omega_0\) \[\begin{align} H(j\omega_0) &= 1+j = \sqrt{2}e^{j\pi/4} \\ H(-j\omega_0) &= 1-j = \sqrt{2}e^{-j\pi/4} \end{align}\]

system response \(y(t)\) of input \(\cos(\omega_0 t)\), note \(\cos(\omega_0t) = \frac{1}{2}(e^{j\omega_0 t} + e^{-j\omega_0 t})\) \[\begin{align} y(t) &= H(j\omega_0)\cdot \frac{1}{2}e^{j\omega_0 t} + H(-j\omega_0)\cdot \frac{1}{2}e^{-j\omega_0 t} \\ &= \sqrt{2}\cos(\omega_0t+\pi/4) \end{align}\]

baud rate

symbol rate, modulation rate or baud rate is the number of symbol changes per unit of time.

  • Bit rate refers to the number of bits transmitted between two devices per unit of time
  • The baud or symbol rate refers to the number of symbols that can be sent in the same amount of time

reference

Stephen P. Boyd. EE102 Lecture 10 Sinusoidal steady-state and frequency response [https://web.stanford.edu/~boyd/ee102/freq.pdf]

Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini. 2018. Feedback Control of Dynamic Systems (8th Edition) (8th. ed.). Pearson.

Inter-Symbol Interference (or Leaky Bits) [http://blog.teledynelecroy.com/2018/06/inter-symbol-interference-or-leaky-bits.html]

[AN001] Designing from zero an IIR filter in Verilog using biquad structure and bilinear discretization. URL:[https://www.controlpaths.com/articles/an001_designing_iir_biquad_filter_bilinear/]

Frequency warping using the bilinear transform. URL:[https://www.controlpaths.com/2022/05/09/frequency-warping-using-the-bilinear-transform/]

Digital control loops. Theoretical approach. URL:[https://www.controlpaths.com/2022/02/28/digital-control-loops-theoretical-approach/]

Simulation of DSP algorithms in Verilog. URL:[https://www.controlpaths.com/2023/05/20/simulation-of-dsp-algorithms-in-verilog/]

Implementing a digital biquad filter in Verilog. URL:[https://www.controlpaths.com/2021/04/19/implementing-a-digital-biquad-filter-in-verilog/]

Implementing a FIR filter using folding. URL:[https://www.controlpaths.com/2021/05/17/implementing-a-fir-filter-using-folding/]

Oppenheim, Alan V. and Cram. “Discrete-time signal processing : Alan V. Oppenheim, 3rd edition.” (2011).

Extras: PID Compensator with Bilinear Approximation URL:[https://ctms.engin.umich.edu/CTMS/index.php?aux=Extras_PIDbilin]

ADC SNR with Clock Jitter

Chembiyan T "SNR of an ADC in the presence of clock jitter" [https://www.linkedin.com/posts/chembiyan-t-0b34b910_adcsnrjitter-activity-7171178121021304833-f2Wd?utm_source=share&utm_medium=member_desktop]

Unlike the quantization noise and the thermal noise, the impact of the clock jitter on the ADC performance depends on the input signal properties like its PSD

image-20241123205352661

The error between the ideal sampled signal and the sampling with clock jitter can be treated as noise and it results in the degradation of the SNR of the ADC
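
As noted above, the jitter-limited SNR depends on the input signal. For the classic special case of a sinusoidal input, it reduces to the standard textbook expression \(-20\log_{10}(2\pi f_{in}\sigma_j)\); a minimal sketch with assumed numbers:

fin     = 1e9;                          % assumed input frequency, 1 GHz
sigma_j = 100e-15;                      % assumed rms clock jitter, 100 fs
snr_dB  = -20*log10(2*pi*fin*sigma_j);  % jitter-limited SNR for a sine input
disp(snr_dB);                           % about 64 dB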

image-20241124004634365

K. Tyagi and B. Razavi, "Performance Bounds of ADC-Based Receivers Due to Clock Jitter," in IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 70, no. 5, pp. 1749-1753, May 2023 [https://www.seas.ucla.edu/brweb/papers/Journals/KT_TCAS_2023.pdf]

N. Da Dalt, M. Harteneck, C. Sandner and A. Wiesbauer, "On the jitter requirements of the sampling clock for analog-to-digital converters," in IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 49, no. 9, pp. 1354-1360, Sept. 2002 [https://sci-hub.se/10.1109/TCSI.2002.802353]

ISF for Oscillators

TODO 📅

image-20241113232703941

Sampled Thermal Noise

The aliasing of the noise, or noise folding, plays an important role in switched-capacitor circuits, as it does in all switched-capacitor filters

image-20240425215938141

Assume for the moment that the switch is always closed (that there is no hold phase), the single-sided noise density would be

image-20240428182816109


image-20240428180635082

\(v_s[n]\) is the sampled version of \(v_{RC}(t)\), i.e. \(v_s[n]= v_{RC}(nT_C)\) \[ S_s(e^{j\omega}) = \frac{1}{T_C} \sum_{k=-\infty}^{\infty}S_{RC}(j(\frac{\omega}{T_C}-\frac{2\pi k}{T_C})) \cdot d\omega \] where \(\omega \in [-\pi, \pi]\), furthermore \(\frac{d\omega}{T_C}= d\Omega\) \[ S_s(j\Omega) = \sum_{k=-\infty}^{\infty}S_{RC}(j(\Omega-k\Omega_s)) \cdot d\Omega \]

image-20240428215559780

image-20240425220033340

The noise in \(S_{RC}\) is a stationary process and so is uncorrelated over \(f\) allowing the \(N\) rectangles to be combined by simply summing their noise powers

image-20240428225949327

image-20240425220400924

where \(m\) is the duty cycle
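
A minimal numeric sketch (capacitor, temperature and clock rate assumed; the duty-cycle factor discussed above is ignored) of the total sampled noise and its folded density:

k = 1.380649e-23;  T = 300;
C  = 1e-12;  fs = 100e6;               % assumed 1 pF sampling cap, 100 MS/s clock
vn2_total = k*T/C;                     % total sampled noise power, V^2 (kT/C)
Sn = vn2_total/(fs/2);                 % single-sided density when folded into [0, fs/2], V^2/Hz
disp([sqrt(vn2_total), sqrt(Sn)]);     % about 64 uVrms total, about 9.1 nV/sqrt(Hz)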


Below analysis focus on sampled noise

image-20240427183257203

image-20240427183349642

image-20240427183516540

image-20240427183458649

  • Calculate autocorrelation function of noise at the output of the RC filter
  • Calculate the spectrum by taking the discrete time Fourier transform of the autocorrelation function

image-20240427183700971

Kundert, Ken. (2006). Simulating Switched-Capacitor Filters with SpectreRF [https://designers-guide.org/analysis/sc-filters.pdf]

Pavan, Schreier and Temes, "Understanding Delta-Sigma Data Converters, Second Edition" ISBN 978-1-119-25827-8

Boris Murmann, EE315B VLSI Data Conversion Circuits, Autumn 2013

- Noise Analysis in Switched-Capacitor Circuits, ISSCC 2011 / tutorials [slides, transcript]

Tania Khanna, ESE568 Fall 2019, Mixed Signal Circuit Design and Modeling URL: https://www.seas.upenn.edu/~ese568/fall2019/

Matt Pharr, Wenzel Jakob, and Greg Humphreys. 2016. Physically Based Rendering: From Theory to Implementation (3rd. ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.

Bernhard E. Boser . Advanced Analog Integrated Circuits Switched Capacitor Gain Stages [https://people.eecs.berkeley.edu/~boser/courses/240B/lectures/M05%20SC%20Gain%20Stages.pdf]

R. Gregorian and G. C. Temes. Analog MOS Integrated Circuits for Signal Processing. Wiley-Interscience, 1986

Trevor Caldwell, Lecture 9 Noise in Switched-Capacitor Circuits [http://individual.utoronto.ca/trevorcaldwell/course/NoiseSC.pdf]

Christian-Charles Enz. High precision CMOS micropower amplifiers [pdf]

spectrum analyzer

We tried to plot a power spectral density together with something that we want to interpret as a power spectrum

  • spectrum of a periodic signal
  • spectral density of a broadband signal such as noise

Sine-wave components are located in individual FFT bins, but broadband signals like noise have their power spread over all FFT bins!

The noise floor depends on the length of the FFT

[http://individual.utoronto.ca/schreier/lectures/2015/1.pdf]

image-20240522214004545

signal tone power \[ P_{\text{sig}} = 2 \frac{X_{w,sig}^2}{S_1^2} \]

noise power \[ P_n = \frac{X_{w,n}^2}{S_2} \]

Then, displayed SNR is obtained \[\begin{align} \mathrm{SNR} &= 10\log_{10}\left(\frac{X_{w,sig}^2}{X_{w,n}^2}\right) \\ &= 10\log_{10}\left(\frac{P_{\text{sig}}}{P_n}\right) + 10\log_{10}\left(\frac{S_1^2}{2S_2}\right) \\ &= \mathrm{SNR}'-10\log_{10}\left(\frac{2S_2}{S_1^2}\right) \\ &= \mathrm{SNR}'-10\log_{10}(2\cdot\mathrm{NBW}) \\ \end{align}\]

DFT's output \(\mathrm{SNR}\)

for N=[2^6 2^8 2^10 2^12]
    wd = rectwin(N);                  % rectangular window
    nbw = enbw(wd)/N;                 % normalized noise bandwidth, relative to fs
    snr_shift = 10*log10(nbw * 2);    % offset between displayed SNR and true SNR, dB
    disp(snr_shift);
end

output:

-15.0515
-21.0721
-27.0927
-33.1133

image-20240522214340882

The solution to the scaling problem in the case of a PSD obtained from a sine-wave-scaled FFT is similarly simple. All we need to do is provide the value of NBW

APPENDIX A - SPECTRAL ESTIMATION - A.2 Scaling and Noise Bandwidth

Pavan, Shanthi, Richard Schreier, and Gabor Temes. (2016) 2016. Understanding Delta-Sigma Data Converters. 2nd ed. Wiley.

  • For a filter with infinitely steep roll-off, the noise bandwidth (NBW) is equal to the filter's bandwidth,
  • while for a filter with a single-pole roll-off, NBW is \(\pi/2\) times the 3-dB bandwidth

reference

David Herres, The difference between signal under-sampling, aliasing, and folding URL: https://www.testandmeasurementtips.com/the-difference-between-signal-under-sampling-aliasing-and-folding-faq/

Pharr, Matt; Humphreys, Greg. (28 June 2010). Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann. ISBN 978-0-12-375079-2. Chapter 7 (Sampling and reconstruction)

Alan V Oppenheim, Ronald W. Schafer. Discrete-Time Signal Processing, 3rd edition

git clone git@github.com:mkubecek/vmware-host-modules.git
cd vmware-host-modules/
git checkout origin/workstation-17.0.2
make -j`nproc`
sudo make install
sudo modprobe -v vmmon
sudo modprobe -v vmnet
sudo vmware-networks --start

This cascode compensation topology is popularly known as Ahuja compensation

The cause of the positive zero is the feedforward current through \(C_m\).

To abolish this zero, we have to cut the feedforward path and create a unidirectional feedback through \(C_m\).

  1. Adding a resistor(nulling resistor) is one way to mitigate the effect of the feedforward current.

  2. Another approach uses a current buffer cascode to pass the small-signal feedback current but cut the feedforward current

People name this approach after the author Ahuja

The benefits of Ahuja compensation over Miller compensation are several:

  • better PSRR

  • higher unity-gain bandwidth using smaller compensation capacitor

  • ability to cope better with heavy capacitive and resistive loads

Miller's approximation

image-20240130224043511

Right-Half-Plane Zero

\[ \left[(v_i - v_o)sC_c - g_m v_i\right]R_o = v_o \] Then \[ \frac{v_o}{v_i} = -g_mR_o\frac{1-s\frac{C_c}{g_m}}{1+sR_oC_c} \] right-half-plane Zero \(\omega _z = \frac{g_m}{C_c}\)

Equivalent cap

The amplifier gain magnitude \(A_v = g_m R_o\) \[ I_\text{c,in} = (v_i - v_o)sC_c \] Then \[\begin{align} I_\text{c,in} &= (v_i + A_v v_i)sC_c \\ & = v_i s (1+A_v)C_c \end{align}\]

we get \(C_\text{in,eq}= (1+A_v)C_c\simeq A_vC_c\)

Similarly \[\begin{align} I_\text{c,out} &= (v_o - v_i)sC_c \\ & = v_o s (1+\frac{1}{A_v})C_c \end{align}\]

we get \(C_\text{out,eq}= (1+\frac{1}{A_v})C_c\simeq C_c\)
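
A tiny numeric sketch (gain and capacitor values assumed) of the two equivalent capacitances:

Av = 100;  Cc = 1e-12;                 % assumed gain magnitude and 1 pF compensation cap
Cin_eq  = (1 + Av)*Cc;                 % about 101 pF reflected to the input
Cout_eq = (1 + 1/Av)*Cc;               % about 1.01 pF reflected to the output
disp([Cin_eq, Cout_eq]*1e12);          % in pF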

cascode compensation

image-20240817193513058

image-20240817201727109

Of course, if the capacitance at the gate of \(M_1\) is taken into account, pole splitting is less pronounced.


including \(r_\text{o2}\)

image-20240819202642809 \[ \frac{V_{out}}{I_{in}} \approx \frac{-g_{m1}R_SR_L(g_{m2}+C_Cs)}{\frac{R_S+r_\text{o2}}{r_\text{o2}}R_LC_LC_Cs^2+g_{m1}g_{m2}R_LR_SC_Cs+g_{m2}} \] The poles are

\[\begin{align} \omega_{p1} &\approx \frac{1}{g_{m1}R_LR_SC_c} \\ \omega_{p2} &\approx \frac{g_{m2}R_Sg_{m1}}{C_L}\frac{r_\text{o2}}{R_S+r_\text{o2}} \end{align}\]

and the zero is not affected; it is still \(\omega_z =\frac{g_{m2}}{C_C}\)

the above model simulation result is shown below

image-20240819221653262

the zero is located between two poles

taking into account the capacitance at the gate of \(M_1\) and all other second-order effects

image-20240819222727276

intuitive analysis of zero

miller compensation

  • zero in the right half plane \[ g_\text{m1}V_P = sC_c V_P \]

cascode compensation

  • zero in the left half plane \[ g_\text{m2}V_X = - sC_c V_X \]

zero_loc.drawio

How to Mitigate Impact of Zero

cascode_compensation

dominant pole \[ \omega_\text{p,d} = \frac {1} {R_\text{eq}g_\text{m9}R_{L}C_{c}} \] first nondominant pole \[ \omega_\text{p,nd} = \frac {g_\text{m4}R_\text{eq}g_\text{m9}} {C_L} \] zero \[ \omega_\text{z} = (g_\text{m4}R_\text{eq})(\frac {g_\text{m9}} {C_c}) \] a much greater magnitude than \(g_\text{m9}/C_C\)

Lectures

EE 240B: Advanced Analog Circuit Design, Prof. Bernhard E. Boser [OTA II, Multi-Stage]

Papers

B. K. Ahuja, "An improved frequency compensation technique for CMOS operational amplifiers," in IEEE Journal of Solid-State Circuits, vol. 18, no. 6, pp. 629-633, Dec. 1983, doi: 10.1109/JSSC.1983.1052012.

D. B. Ribner and M. A. Copeland, "Design techniques for cascoded CMOS op amps with improved PSRR and common-mode input range," in IEEE Journal of Solid-State Circuits, vol. 19, no. 6, pp. 919-925, Dec. 1984, doi: 10.1109/JSSC.1984.1052246.

Abo, Andrew & Gray, Paul. (1999). A 1.5V, 10-bit, 14MS/s CMOS Pipeline Analog-to-Digital Converter.

Book's chapters

Design of analog CMOS integrated circuits, Behzad Razavi

  • 10.5 Compensation of Two-Stage Op Amps
  • 10.7 Other Compensation Techniques

Analog Design Essentials, Willy M.C. Sansen

  • chapter #5 Stability of operational amplifiers - Compensation of positive zero

Analysis and Design of Analog Integrated Circuits 5th Edition, Paul R. Gray, Paul J. Hurst, Stephen H. Lewis, Robert G. Meyer

  • 9.4.3 Two-Stage MOS Amplifier Compensation

CMOS Analog Circuit Design 3rd Edition, Phillip E. Allen, Douglas R. Holberg

  • 6.2.2 Miller Compensation of the Two-Stage Op Amp

reference

B. K. Ahuja, "An Improved Frequency Compensation Technique for CMOS Operational Amplifiers," IEEE J. Solid-State Circuits, vol. 18, no. 6, pp. 629-633, Dec. 1983.

U. Dasgupta, "Issues in "Ahuja" frequency compensation technique", IEEE International Symposium on Radio-Frequency Integration Technology, 2009.

R. J. Reay and G. T. A. Kovacs, "An unconditionally stable two-stage CMOS amplifier," IEEE J. Solid-State Circuits, vol. 30, no. 5, pp. 591-594, May 1995.

A. Garimella and P. M. Furth, "Frequency compensation techniques for op-amps and LDOs: A tutorial overview," 2011 IEEE 54th International Midwest Symposium on Circuits and Systems (MWSCAS), 2011, pp. 1-4, doi: 10.1109/MWSCAS.2011.6026315.

H. Aminzadeh, R. Lotfi and S. Rahimian, "Design Guidelines for Two-Stage Cascode-Compensated Operational Amplifiers," 2006 13th IEEE International Conference on Electronics, Circuits and Systems, 2006, pp. 264-267, doi: 10.1109/ICECS.2006.379776.

H. Aminzadeh and K. Mafinezhad, "On the power efficiency of cascode compensation over Miller compensation in two-stage operational amplifiers," Proceeding of the 13th international symposium on Low power electronics and design (ISLPED '08), Bangalore, India, 2008, pp. 283-288, doi: 10.1145/1393921.1393995.

Stabilizing a 2-Stage Amplifier URL:https://a2d2ic.wordpress.com/2016/11/10/stabilizing-a-2-stage-amplifier/

overview

image-20240721172721884

image-20240629140001275

\(\omega_d\) is called the damped natural frequency


closed loop frequency response

image-20240629134127219 \[\begin{align} A &= \frac{\frac{A_0}{(1+s/\omega_1)(1+s/\omega_2)}}{1+\beta \frac{A_0} {(1+s/\omega_1)(1+s/\omega_2)}} \\ &= \frac{A_0}{1+A_0 \beta}\frac{1}{\frac{s^2}{\omega_1\omega_2(1+A_0\beta)}+\frac{1/\omega_1+1/\omega_2}{1+A_0\beta}s+1} \\ &\simeq \frac{A_0}{1+A_0 \beta}\frac{1}{\frac{s^2}{\omega_u\omega_2}+\frac{1}{\omega_u}s+1} \\ &= \frac{A_0}{1+A_0 \beta}\frac{\omega_u\omega_2}{s^2+\omega_2s+\omega_u\omega_2} \end{align}\]

That is \(\omega_n = \sqrt{\omega_u\omega_2}\) and \(\zeta = \frac{1}{2}\sqrt{\frac{\omega_2}{\omega_u}}\)

where \(\omega_u\) is the unity gain bandwidth

image-20240629112429803

where \(f_r\) is the resonant frequency, \(\zeta\) the damping ratio, \(P_f\) the maximum frequency-response peaking, and \(P_t\) the peak of the first overshoot of the step response

image-20240629142324982

damping factor & phase margin

  • phase margin is defined for open loop system

  • damping factor (\(\zeta\)) is defined for close loop system

The phase margin is roughly 90 to 100 times the damping factor (\(\zeta\)) \[ \mathrm{PM} \approx 90\zeta \sim 100\zeta \] In order to have a good, stable system, we want \(\zeta > 0.5\), i.e. a phase margin of more than \(45^o\)

We can analyze the open-loop system from a better perspective because it is simpler. So we always use loop-gain analysis to find the phase margin and see whether the system is stable or not.
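
The rule of thumb can be checked against the exact phase margin of the standard unity-feedback second-order loop \(L(s)=\frac{\omega_n^2}{s(s+2\zeta\omega_n)}\) (this particular loop shape is assumed here):

zeta = 0.3:0.1:0.8;
pm_exact  = atand(2*zeta ./ sqrt(sqrt(1 + 4*zeta.^4) - 2*zeta.^2));  % exact PM in degrees
pm_approx = 100*zeta;                                                % rule-of-thumb PM
disp([zeta; pm_exact; pm_approx]);     % the approximation is good up to roughly zeta = 0.6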

additional Zero

\[\begin{align} TF &= \frac{s +\omega_z}{s^2+2\zeta \omega_ns+\omega_n^2} \\ &= \frac{\omega _z}{\omega _n^2}\cdot \frac{1+s/\omega _z}{1+s^2/\omega_n^2+2\zeta s/\omega_n} \end{align}\]

Let \(s=j\omega\) and omit factor, \[ A_\text{dB}(\omega) = 10\log[1+(\frac{\omega}{\omega _z})^2] - 10\log[1+\frac{\omega^4}{\omega_n^4}+\frac{2\omega^2(2\zeta ^2 -1)}{\omega_n^2}] \] peaking frequency \(\omega_\text{peak}\) can be obtained via \(\frac{d A_\text{dB}(\omega)}{d\omega} = 0\) \[ \omega_\text{peak} = \omega_z \sqrt{\sqrt{(\frac{\omega_n}{\omega_z})^4 - 2(\frac{\omega_n}{\omega_z})^2(2\zeta ^2-1)+1} - 1} \]
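
A minimal numerical cross-check of the closed-form \(\omega_\text{peak}\) against a brute-force search of \(A_\text{dB}(\omega)\) (the numbers below are assumed):

wn = 2*pi*1e6;  zeta = 0.6;  wz = 2*pi*3e6;   % assumed natural frequency, damping, zero
w  = 2*pi*logspace(5, 8, 20000);
AdB = 10*log10(1 + (w/wz).^2) ...
    - 10*log10(1 + (w/wn).^4 + 2*(w/wn).^2*(2*zeta^2 - 1));
[~, k] = max(AdB);
wpk_num = w(k);                                % numerical peak location
wpk_ana = wz*sqrt(sqrt((wn/wz)^4 - 2*(wn/wz)^2*(2*zeta^2 - 1) + 1) - 1);
disp([wpk_num, wpk_ana]/(2*pi));               % the two agree closely (in Hz)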

Settling Time

single-pole

image-20240725204501781

image-20240725204527121 \[ \tau \simeq \frac{1}{\beta \omega_\text{ugb}} \]

tau_1pole.drawio

two poles

Rise Time

Katsuhiko Ogata, Modern Control Engineering Fifth Edition

image-20240721180718116

For underdamped second order systems, the 0% to 100% rise time is normally used

For \(\text{PM}=70^o\)

  • \(\omega_2=3\omega_u\), that is \(\omega_n = 1.7\omega_u\).
  • \(\zeta = 0.87\)

Then \[ t_r = \frac{3.1}{\omega_u} \]

Settling Time

Gene F. Franklin, Feedback Control of Dynamic Systems, 8th Edition

image-20240721181956221

image-20240721182025547

As we know \[ \zeta \omega_n=\frac{1}{2}\sqrt{\frac{\omega_2}{\omega_u}}\cdot \sqrt{\omega_u\omega_2}=\frac{1}{2}\omega_2 \]

Then \[ t_s = \frac{9.2}{\omega_2} \]

For \(\text{PM}=70^o\), \(\omega_2 = 3\omega_u\), that is \[ t_s \simeq \frac{3}{\omega_u} \space\space \text{, for PM}=70^o \]

For \(\text{PM}=45^o\), \(\omega_2 = \omega_u\), that is \[ t_s \simeq \frac{9.2}{\omega_u} \space\space \text{, for PM}=45^o \]

Above equation is valid only for underdamped, \(\zeta=\frac{1}{2}\sqrt{\frac{\omega_2}{\omega_u}}\lt 1\), that is \(\omega_2\lt 4\omega_u\)
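
A minimal numeric check of the 1% settling-time rule of thumb using the standard underdamped second-order step response (the \(\text{PM}=70^o\) numbers are assumed):

wu = 2*pi*100e6;  w2 = 3*wu;            % assumed unity-gain bandwidth and second pole
wn = sqrt(wu*w2);  zeta = 0.5*sqrt(w2/wu);
wd = wn*sqrt(1 - zeta^2);
t  = linspace(0, 20/wu, 200001);
y  = 1 - exp(-zeta*wn*t)/sqrt(1 - zeta^2).*sin(wd*t + acos(zeta));  % unit step response
ts = t(find(abs(y - 1) > 0.01, 1, 'last'));  % last excursion outside the 1% band
disp([ts, 9.2/w2]);                          % the numeric value is close to (slightly below) 9.2/w2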

reference

Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini. 2018. Feedback Control of Dynamic Systems (8th Edition) (8th. ed.). Pearson.

Katsuhiko Ogata, Modern Control Engineering, 5th edition

Conversion Relationships

Capacitor

image-20231224163730529


image-20240119000951498

image-20240119001025892

image-20240119001309410

Inductor

image-20231224163740411

Derivation

image-20231224163905233

image-20231224163916109

series or parallel representation

reference

Tank Circuits/Impedances [https://stanford.edu/class/ee133/handouts/lecturenotes/lecture5_tank.pdf]

Resonant Circuits [https://web.ece.ucsb.edu/~long/ece145b/Resonators.pdf]

Series & Parallel Impedance Parameters and Equivalent Circuits [https://assets.testequity.com/te1/Documents/pdf/series-parallel-impedance-parameters-an.pdf]

ES Lecture 35: Non ideal capacitor, Capacitor Q and series RC to parallel RC conversion [https://youtu.be/CJ_2U5pEB4o?si=4j4CWsLSapeu-hBo]

Are AC-Driven Circuits Linear?

\[ f(x_1 + x_2)= f(x_1)+ f(x_2) \]

AC-driven circuits are often mistaken for non-linear circuits; however, the basis that determines the linearity of a circuit is the relationship between its voltage and current.

While an AC signal varies with time, it still exhibits a linear relationship across elements like resistors, capacitors, and inductors. Therefore, AC-driven circuits are linear.

Phasor

Phasor concept has no real physical significance. It is just a convenient mathematical tool.

Phasor analysis determines the steady-state response to a linear circuit driven by sinusoidal sources with frequency \(f\)

If your circuit includes transistors or other nonlinear components, all is not lost. There is an extension of phasor analysis to nonlinear circuits called small-signal analysis in which you linearize the components before performing phasor analysis - AC analyses of SPICE

A sinusoid is characterized by 3 numbers, its amplitude, its phase, and its frequency. For example \[ v(t) = A\cos(\omega t + \phi) \tag{1} \] In a circuit there will be many signals but in the case of phasor analysis they will all have the same frequency. For this reason, the signals are characterized using only their amplitude and phase.

The combination of an amplitude and phase to describe a signal is the phasor for that signal.

Thus, the phasor for the signal in \((1)\) is \(A\angle \phi\)

In general, phasors are functions of frequency

Often it is preferable to represent a phasor using complex numbers rather than using amplitude and phase. In this case we represent the signal as: \[ v(t) = \Re\{Ve^{j\omega t} \} \tag{2} \] where \(V=Ae^{j\phi}\) is the phasor.

\((1)\) and \((2)\) are the same

Phasor Model of a Resistor

A linear resistor is defined by the equation \(v = Ri\)

Now, assume that the resistor current is described with the phasor \(I\). Then \[ i(t) = \Re\{Ie^{j\omega t}\} \] \(R\) is a real constant, and so the voltage can be computed to be \[ v(t) = R\Re\{Ie^{j\omega t}\} = \Re\{RIe^{j\omega t}\} = \Re\{Ve^{j\omega t}\} \] where \(V\) is the phasor representation for \(v\), i.e. \[ V = RI \]

  1. Thus, given the phasor for the current we can directly compute the phasor for the voltage across the resistor.

  2. Similarly, given the phasor for the voltage across a resistor we can compute the phasor for the current through the resistor using \(I = \frac{V}{R}\)

Phasor Model of a Capacitor

A linear capacitor is defined by the equation \(i=C\frac{dv}{dt}\)

Now, assume that the voltage across the capacitor is described with the phasor \(V\). Then \[ v(t) = \Re\{ V e^{j\omega t}\} \] \(C\) is a real constant \[ i(t) = C\Re\{\frac{d}{dt}V e^{j\omega t}\} = \Re\{j\omega C V e^{j\omega t}\} \] The phasor representation for \(i\) is \(i(t) = \Re\{Ie^{j\omega t}\}\), that is \(I = j\omega C V\)

  1. Thus, given the phasor for the voltage across a capacitor we can directly compute the phasor for the current through the capacitor.

  2. Similarly, given the phasor for the current through a capacitor we can compute the phasor for the voltage across the capacitor using \(V=\frac{I}{j\omega C}\)

Phasor Model of an Inductor

A linear inductor is defined by the equation \(v=L\frac{di}{dt}\)

Now, assume that the inductor current is described with the phasor \(I\). Then \[ i(t) = \Re\{ I e^{j\omega t}\} \] \(L\) is a real constant, and so the voltage can be computed to be \[ v(t) = L\Re\{\frac{d}{dt}I e^{j\omega t}\} = \Re\{j\omega L I e^{j\omega t}\} \] The phasor representation for \(v\) is \(v(t) = \Re\{Ve^{j\omega t}\}\), that is \(V = j\omega L I\)

  1. Thus, given the phasor for the current we can directly compute the phasor for the voltage across the inductor.

  2. Similarly, given the phasor for the voltage across an inductor we can compute the phasor for the current through the inductor using \(I=\frac{V}{j\omega L}\)

Impedance and Admittance

Impedance and admittance are generalizations of resistance and conductance.

They differ from resistance and conductance in that they are complex and they vary with frequency.

Impedance is defined to be the ratio of the phasor for the voltage across the component and the current through the component: \[ Z = \frac{V}{I} \]

Impedance is a complex value. The real part of the impedance is referred to as the resistance and the imaginary part is referred to as the reactance

For a linear component, admittance is defined to be the ratio of the phasor for the current through the component and the voltage across the component: \[ Y = \frac{I}{V} \]

Admittance is a complex value. The real part of the admittance is referred to as the conductance and the imaginary part is referred to as the susceptance.
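
A small phasor-analysis sketch (element values and frequency assumed): the steady-state gain and phase of a series-R, shunt-C divider follow directly from the impedances above:

f = 1e3;  w = 2*pi*f;                  % assumed 1 kHz excitation
R = 1e3;  C = 100e-9;                  % assumed element values
Zc = 1/(1j*w*C);                       % capacitor impedance
H  = Zc/(R + Zc);                      % phasor Vout/Vin of the RC low-pass
disp([abs(H), angle(H)*180/pi]);       % about 0.85 at -32 degrees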

Response to Complex Exponentials

The response of an LTI system to a complex exponential input is the same complex exponential with only a change in amplitude

\[\begin{align} y(t) &= H(s)e^{st} \\ H(s) &= \int_{-\infty}^{+\infty}h(\tau)e^{-s\tau}d\tau \end{align}\]

where \(h(t)\) is the impulse response of a continuous-time LTI system

convolution integral is used here

\[\begin{align} y[n] &= H(z)z^n \\ H(z) &= \sum_{k=-\infty}^{+\infty}h[k]z^{-k} \end{align}\]

where \(h(n)\) is the impulse response of a discrete-time LTI system

convolution sum is used here

The signals of the form \(e^{st}\) in continuous time and \(z^{n}\) in discrete time, where \(s\) and \(z\) are complex numbers are referred to as an eigenfunction of the system, and the amplitude factor \(H(s)\), \(H(z)\) is referred to as the system's eigenvalue
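
A minimal numeric check of the eigenfunction property in discrete time (the FIR impulse response and the value of \(z\) below are assumed):

h = [0.25 0.5 0.25];                  % assumed impulse response of an LTI system
z = 0.9*exp(1j*pi/8);                 % assumed complex number z
n = 0:40;
x = z.^n;                             % eigenfunction input z^n
y = conv(x, h);  y = y(1:numel(n));   % convolution sum
Hz = sum(h .* z.^(-(0:numel(h)-1)));  % eigenvalue H(z) = sum_k h[k] z^-k
disp(max(abs(y(4:end) - Hz*z.^(n(4:end)))));  % ~0 once the FIR start-up transient has passed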

Laplace transform

One of the important applications of the Laplace transform is in the analysis and characterization of LTI systems, which stems directly from the convolution property \[ Y(s) = H(s)X(s) \] where \(X(s)\), \(Y(s)\), and \(H(s)\) are the Laplace transforms of the input, output, and impulse response of the system, respectively

From the response of LTI systems to complex exponentials, if the input to an LTI system is \(x(t) = e^{st}\), with \(s\) in the ROC of \(H(s)\), then the output will be \(y(t)=H(s)e^{st}\); i.e., \(e^{st}\) is an eigenfunction of the system with eigenvalue equal to the Laplace transform of the impulse response.

s-Domain Element Models

image-20231223225541693

image-20231223225609893

Sinusoidal Steady-State Analysis

Here Sinusoidal means that source excitations have the form \(V_s\cos(\omega t +\theta)\) or \(V_s\sin(\omega t+\theta)\)

Steady state means that all transient behavior of the stable circuit has died out, i.e., decayed to zero

image-20231223212820547

image-20231223212846596

image-20231223213016508

\(s\)-domain and phasor-domain

Phasor analysis is a technique to find the steady-state response when the system input is a sinusoid. That is, phasor analysis is sinusoidal analysis.

  • Phasor analysis is a powerful technique with which to find the steady-state portion of the complete response.
  • Phasor analysis does not find the transient response.
  • Phasor analysis does not find the complete response.

The beauty of the phasor-domain circuit is that it is described by algebraic KVL and KCL equations with time-invariant sources, not differential equations of time

image-20231224001422189

image-20231223230739219

The difference here is that Laplace analysis can also give us the transient response

image-20231224132406755

General Response Classifications

img

  • zero-input response, zero-state response & complete response

    image-20231223235252850

    The zero-state response is given by \(\mathscr{L}^{-1}[H(s)F(s)]\), for the arbitrary \(s\)-domain input \(F(s)\)

    where \(Z_L(s) = sL\), the inductor with zero initial current \(i_L(0)=0\) and \(Z_C(s)=1/sC\) with zero initial voltage \(v_C(0)=0\)

  • transient response & steady-state response

    image-20231224000454014

  • natural response & forced response

    image-20231224000817438


image-20240118212304219

Transfer Functions and Frequency Response

transfer function

The transfer function \(H(s)\) is the ratio of the Laplace transform of the output of the system to its input assuming all zero initial conditions.

image-20240106185523937

image-20240106185937270

frequency response

An immediate consequence of convolution is that an input of the form \(e^{st}\) results in an output \[ y(t) = H(s)e^{st} \] where the specific constant \(s\) may be complex, expressed as \(s = \sigma + j\omega\)

A very common way to use the exponential response of LTIs is in finding the frequency response i.e. response to a sinusoid

First, we express the sinusoid as a sum of two exponential expressions (Euler’s relation): \[ \cos(\omega t) = \frac{1}{2}(e^{j\omega t}+e^{-j\omega t}) \] If we let \(s=j\omega\), then \(H(-j\omega)=H^*(j\omega)\), in polar form \(H(j\omega)=Me^{j\phi}\) and \(H(-j\omega)=Me^{-j\phi}\). \[\begin{align} y_+(t) & = H(s)e^{st}|_{s=j\omega} = H(j\omega)e^{j\omega t} = M e^{j(\omega t + \phi)} \\ y_-(t) & = H(s)e^{st}|_{s=-j\omega} = H(-j\omega)e^{-j\omega t} = M e^{-j(\omega t + \phi)} \end{align}\]

By superposition, the response to the sum of these two exponentials, which make up the cosine signal, is the sum of the responses \[\begin{align} y(t) &= \frac{1}{2}[H(j\omega)e^{j\omega t} + H(-j\omega)e^{-j\omega t}] \\ &= \frac{M}{2}[e^{j(\omega t + \phi)} + e^{-j(\omega t + \phi)}] \\ &= M\cos(\omega t + \phi) \end{align}\]

where \(M = |H(j\omega)|\) and \(\phi = \angle H(j\omega)\)

This means if a system represented by the transfer function \(H(s)\) has a sinusoidal input, the output will be sinusoidal at the same frequency with magnitude \(M\) and will be shifted in phase by the angle \(\phi\)

Laplace transform & Fourier transform

  • Laplace transforms such as \(Y(s)=H(s)U(s)\) can be used to study the complete response characteristics of systems, including the transient response—that is, the time response to an initial condition or suddenly applied signal
  • This is in contrast to the use of Fourier transforms, which only take into account the steady-state response

Given a general linear system with transfer function \(H(s)\) and an input signal \(u(t)\), the procedure for determining \(y(t)\) using the Laplace transform is given by the following steps:

image-20240106224403401

reference

Ken Kundert. Introduction to Phasors. Designer’s Guide Community. September 2011.

How to Perform Linearity Circuit Analysis [https://resources.pcb.cadence.com/blog/2021-how-to-perform-linearity-circuit-analysis]

Stephen P. Boyd. EE102 Lecture 7 Circuit analysis via Laplace transform [https://web.stanford.edu/~boyd/ee102/laplace_ckts.pdf]

Cheng-Kok Koh, EE695K VLSI Interconnect, S-Domain Analysis [https://engineering.purdue.edu/~chengkok/ee695K/lec3c.pdf]

Kenneth R. Demarest, Circuit Analysis using Phasors, Laplace Transforms, and Network Functions [https://people.eecs.ku.edu/~demarest/212/Phasor%20and%20Laplace%20review.pdf]

DeCarlo, R. A., & Lin, P.-M. (2009). Linear circuit analysis : time domain, phasor, and Laplace transform approaches (3rd ed).

Davis, Artice M.. "Linear Circuit Analysis." The Electrical Engineering Handbook - Six Volume Set (1998)

Duane Marcy, Fundamentals of Linear Systems [http://lcs-vc-marcy.syr.edu:8080/Chapter22.html]

Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini. 2018. Feedback Control of Dynamic Systems (8th Edition) (8th. ed.). Pearson.

Data Register, DR:

  • Bypass Register, BR
  • Boundary Scan Register, BSR

Instruction Register, IR

TAP Controller

image-20240113191225838

  • FSM and Shift Register of DR and IR works at the posedge of the clock
  • TMS, TDI, TDO and Hold Register of DR and IR changes value at the negedge of the clock

image-20240113191409296

image-20240113191526490

Capture-IR loads the fixed value 01 into the two least-significant bits; this fixed pattern is for easier fault detection

image-20231129232443249

image-20231129233218011

After power-up, they may not be in sync, but there is a trick. Look at the state machine and notice that no matter what state you are in, if TMS stays at "1" for five clocks, the TAP controller goes back to the "Test-Logic Reset" state. That's used to synchronize the TAP controllers.

It is important to note that in a typical Boundary-Scan test, the time between launching a signal from a driver (at the falling edge of the test clock (TCK) in the Update-DR or Update-IR TAP Controller state) and capturing that signal (at the rising edge of TCK in the Capture-DR TAP Controller state) is no less than 2.5 TCK cycles

Further, the time between successive launches on a driver is governed not only by the TCK rate but also by the amount of serial data shifting needed to load the next pattern data into the concatenated Boundary-Scan Registers of the Boundary-Scan chain

Thus the effective test data rate of a driver could be thousands of times lower than the TCK rate

  1. For DC-coupled interconnect, this time is of no concern
  2. For AC-coupled interconnect, the signal may easily decay partially or completely before it can be captured
  3. If only partial decay occurs before capture, that decay will very likely be completed before the driver produces the next edge

AC-coupling

In general, AC-coupling can distort a signal transmitted across a channel depending on its frequency.

Figure 5

  • The high frequency signal is relatively unaffected by the coupling
  • The low frequency signal is severely impacted
    1. it decays to \(V_T\) after a few time constants
    2. its transient amplitude is double the input amplitude: before the AC-coupling capacitor the transition is \(-A_p \to A_p\); after the AC-coupling capacitor it is \(V_T \to V_T+2A_p\). A key item to note is that the transitions in the original signal are preserved, although their start and end points are offset compared to where they were in the high-frequency case

Test signal implementation

The test data is either the content of the Boundary-Scan Register Update latch (U) when executing the (DC) EXTEST instruction, or an "AC Signal" when an AC testing instruction is loaded into the device.

The AC signal is a test waveform suited for transmission through AC-coupling

image-20240113184502597

Test signal reception

  • When an AC testing instruction is loaded, a specialized test receiver detects transitions of the AC signal seen at the input and determines whether this represents a logic '0' or '1'
  • When EXTEST is loaded, the input signal level is detected and sent through the test receiver to the Boundary-Scan Register cell

When testing for a shorted capacitor, the test software must ensure that enough time has passed for the signal to decay before entering Capture-DR, either by stopping TCK or by spending additional TCK cycles in the Run-Test/Idle TAP Controller state

EXTEST_PULSE & EXTEST_TRAIN

The two new AC-test instructions provided by this standard differ primarily in the number and timing of transitions to provide flexibility in dealing with the specific dynamic behavior of the channels being tested

AC Test Signal essentially modulates test data so that it will propagate through AC-coupled channels, for devices that contain AC pins

Tools should use the EXTEST_PULSE instruction unless there is a specific requirement for the EXTEST_TRAIN instruction

EXTEST_PULSE

Generates two additional driver transitions and allows a tester to vary the time between them depending on how many TCK cycles the TAP is left in the Run-Test/Idle TAP Controller state.

This is intended to allow any undesired transient condition to decay to a DC steady-state value when that will make the final transition more reliably detectable

The duration in the Run-Test/Idle TAP Controller state should be at least three times the high-pass coupling time constant. This allows the first additional transition to decay away to the DC steady-state value for the channel, and ensures that the full amplitude of the final transition is added to or subtracted from that steady-state value

This establishes a known initial condition for the final transition and permits reliable specification of the detection threshold of the test receiver

image-20240113190314947

EXTEST_TRAIN

Generates multiple additional transitions; the number depends on how long the TAP is left in the Run-Test/Idle TAP Controller state

This is intended to allow any undesired transient condition to decay to an AC steady-state value when that will make the final transition more reliably detectable

image-20240113190345323

IEEE Std 1149.6-2003

This standard is built on top of IEEE Std 1149.1 using the same Test Access Port structure and Boundary-Scan architecture.

  • It adds the concept of a "test receiver" to input pins that are expected to handle differential and/or AC-coupling
  • It adds two new instructions that cause drivers to emit AC waveforms that are processed by test receivers.

JTAG Instruction

Implementation

  • AC mode: hysteresis, detects transitions
  • DC mode: the threshold is determined by the JTAG initial value

reference

IEEE Std 1149.1-2001, IEEE Standard Test Access Port and Boundary-Scan Architecture, IEEE, 2001

IEEE Std 1149.6-2003, IEEE Standard for BoundaryScan Testing of Advanced Digital Networks, IEEE, 2003

IEEE 1149.6 Tutorial | Testing AC-coupled and Differential High-speed Nets [https://www.asset-intertech.com/resources/eresources/ieee-11496-tutorial-testing-ac-coupled-and-differential-high-speed-nets/]

Prof. James Chien-Mo Li, Lab of Dependable Systems, National Taiwan University. VLSI Testing [http://cc.ee.ntu.edu.tw/~cmli/VLSItesting/]

K.P. Parker, The Boundary Scan Handbook, 3rd ed., Kluwer Academic, 2003.

B. Eklow, K. P. Parker and C. F. Barnhart, "IEEE 1149.6: a boundary-scan standard for advanced digital networks," in IEEE Design & Test of Computers, vol. 20, no. 5, pp. 76-83, Sept.-Oct. 2003, doi: 10.1109/MDT.2003.1232259.

Effective Switching resistance

image-20231114001209252

https://www.eecis.udel.edu/~vsaxena/courses/ece445/s19/Lecture%20Notes/lec15_ece445.pdf

wire delay

The Elmore Delay

image-20230624234813719

image-20230624234940864

image-20230625001756173

Basic idea: use of mean of \(v'(t)\) to approximate median of \(v'(t)\)

image-20230624235148246

image-20230625002239199

Elmore delay approximates the median of \(h(t)\) by the mean of \(h(t)\)
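
A minimal sketch of the Elmore delay computed at the far end of a uniform 3-section RC ladder (element values assumed):

R = [100 100 100];                    % series resistances, ohms, driver to load
C = [10e-15 10e-15 10e-15];           % node capacitances, farads
td = 0;
for i = 1:numel(C)
    td = td + sum(R(1:i))*C(i);       % each cap weighted by the resistance shared with the output path
end
disp(td);                             % 1 ps + 2 ps + 3 ps = 6 ps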

Distributed RC-Line

image-20230624224005736

Lumped approximations

\(rc\)-models

If your simulator does not support a distributed \(rc\)-model, or if the computational complexity of these models slows down your simulation too much, you can construct a simple yet accurate model yourself by approximating the distributed \(rc\) by a lumped RC network with a limited number of elements

image-20230624230057265

The accuracy of the model is determined by the number of stages. For instance, the error of the \(\Pi -3\) model is less than 3%, which is generally sufficient.

Why use "\(\Pi\) Model"

image-20230624230800255

examples

image-20230624224643487

image-20230624224923241

Wire Inductive Effect

  • RC delay increases quadratically with length
  • LC delay (speed of light flight time) increases linearly with length

Inductance will only be important to the delay of low-resistance signals such as wide clock lines

wave

Signal propagates over the wire as a wave (rather than diffusing as in \(rc\) only models)

Signal propagates by alternately transferring energy from capacitive to inductive modes

reference

Akio Kitagawa, Analog layout design https://mixsignal.files.wordpress.com/2013/03/analog-layout.pdf

THE WIRE http://bwrcs.eecs.berkeley.edu/Classes/icdesign/ee141_f01/Notes/chapter4.pdf

Anoop Veliyath, Design Engineer, Cadence Design Systems. Accurately Modeling Transmission Line Behavior with an LC Network-based Approach [pdf]

Mark Horowitz. Lecture 2: Wires and Wire Models [pdf]

Neil Weste and David Harris. 2010. CMOS VLSI Design: A Circuits and Systems Perspective (4th. ed.). Addison-Wesley Publishing Company, USA.

Cheng-Kok Koh. EE695K Modeling and Optimization of High Performance Interconnect [lec3a_pdf]

Vishal Saxena. ECE 445 Intro to VLSI Design: Lectures for Spring 2019 https://www.eecis.udel.edu/~vsaxena/courses/ece445/s19/ECE445.htm

image-20241106231114717

Ensemble average

[https://ece-research.unm.edu/bsanthan/ece541/stat.pdf]

[https://www.nii.ac.jp/qis/first-quantum/e/forStudents/lecture/pdf/noise/chapter1.pdf]

  • Time average: time-averaged quantities for the \(i\)-th member of the ensemble
  • Ensemble average: ensemble-averaged quantities for all members of the ensemble at a certain time

image-20241116113758119

image-20241116215119239

image-20241116215140298

where \(\theta\) is one member of the ensemble; \(p(x)dx\) is the probability that \(x\) is found among \([x, x + dx]\)

autocorrelation, Stationarity & Ergodicity

autocorrelation

image-20241116112504606

The expectation returns the probability-weighted average of the specific function at that specific time over all possible realizations of the process

Stationarity

[https://ece-research.unm.edu/bsanthan/ece541/station.pdf]

image-20241123221623537

image-20240720140527704

Ergodicity

ensemble autocorrelation and temporal autocorrelation (time autocorrelation)

image-20240719230346944

image-20240719210621021


image-20241123004051314

LTI Filtering of WSS process

mean

image-20240917104857284

image-20240917104916086


image-20240827221945277

autocorrelation

deterministic autocorrelation function

image-20240427170024123

\[ R_{yy}(\tau) = h(\tau)*R_{xx}(\tau)*h(-\tau) =R_{xx}(\tau)*h(\tau)*h(-\tau) \]

image-20240907211343832

Why is \(\overline{R}_{hh}(\tau) \overset{\Delta}{=} h(\tau)*h(-\tau)\) an autocorrelation? The proof is as follows:

\[\begin{align} \overline{R}_{hh}(\tau) &= h(\tau)*h(-\tau) \\ &= \int_{-\infty}^{\infty}h(x)h(-(\tau - x))dx \\ &= \int_{-\infty}^{\infty}h(x)h(x-\tau)dx \\ &=\int_{-\infty}^{\infty}h(x+\tau)h(x)dx \end{align}\]


PSD

image-20240827222224395

image-20240827222235906

Topic 6 Random Processes and Signals [https://www.robots.ox.ac.uk/~dwm/Courses/2TF_2021/N6.pdf]

Alan V. Oppenheim, Introduction To Communication, Control, And Signal Processing [https://ocw.mit.edu/courses/6-011-introduction-to-communication-control-and-signal-processing-spring-2010/a6bddaee5966f6e73450e6fe79ab0566_MIT6_011S10_notes.pdf]

Balu Santhanam, Probability Theory & Stochastic Process 2020: LTI Systems and Random Signals [https://ece-research.unm.edu/bsanthan/ece541/LTI.pdf]


Time Reversal \[ x(-t) \overset{FT}{\longrightarrow} X(-j\omega) \]

if \(x(t)\) is real, then \(X(j\omega)\)​ has conjugate symmetry \[ X(-j\omega) = X^*(j\omega) \]

Periodogram

The periodogram is in fact the Fourier transform of the aperiodic correlation of the windowed data sequence

image-20240907215822425

image-20240907215957865

image-20240907230715637

estimating continuous-time stationary random signal

periodogram.drawio

The sequence \(x[n]\) is typically multiplied by a finite-duration window \(w[n]\), since the input to the DFT must be of finite duration. This produces the finite-length sequence \(v[n] = w[n]x[n]\)

image-20240910005608007

image-20240910005927534

image-20240910005723458

\[\begin{align} \hat{P}_{ss}(\Omega) &= \frac{|V(e^{j\omega})|^2}{LU} \\ &= \frac{|V(e^{j\omega})|^2}{\sum_{n=0}^{L-1}(w[n])^2} \tag{1}\\ &= \frac{L|V(e^{j\omega})|^2}{\sum_{k=0}^{L-1}(W[k])^2} \tag{2} \end{align}\]

image-20240910010638376

That is, by \((1)\) \[ \hat{P}_{ss}(\Omega) = T_s\hat{P}_{xx}(\omega) = \frac{T_s|V(e^{j\omega})|^2}{\sum_{n=0}^{L-1}(w[n])^2}=\frac{|V(e^{j\omega})|^2}{f_{res}L\sum_{n=0}^{L-1}(w[n])^2} \]

That is, by \((2)\) \[ \hat{P}_{ss}(\Omega) = T_s\hat{P}_{xx}(\omega) = \frac{T_sL|V(e^{j\omega})|^2}{\sum_{k=0}^{L-1}(W[k])^2} = \frac{|V(e^{j\omega})|^2}{f_{res}\sum_{k=0}^{L-1}(W[k])^2} \]

!! ENBW
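
A minimal numeric sketch (sample rate, record length and noise level assumed; rectangular window, so the ENBW is one bin) showing how the scaled periodogram of white noise recovers the expected two-sided density \(\sigma^2/f_s\):

fs = 1e6;  N = 2^14;  sigma = 1e-3;   % assumed sample rate, record length, noise RMS
x  = sigma*randn(1, N);
X  = fft(x);
Pxx = (abs(X).^2)/(N*fs);             % periodogram scaled to V^2/Hz (two-sided, rectangular window)
disp([mean(Pxx), sigma^2/fs]);        % the average estimate approaches sigma^2/fs = 1e-12 V^2/Hz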

Wiener-Khinchin theorem

Norbert Wiener proved this theorem for the case of a deterministic function in 1930; Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934. Albert Einstein explained, without proofs, the idea in a brief two-page memo in 1914

For \(x(t)\), take the Fourier transform over a limited period of time \([-T/2, +T/2]\): \(X_T(f) = \int_{-T/2}^{T/2}x(t)e^{-j2\pi ft}dt\)

With Parseval's theorem \[ \int_{-T/2}^{T/2}|x(t)|^2dt = \int_{-\infty}^{\infty}|X_T(f)|^2df \] So that \[ \frac{1}{T}\int_{-T/2}^{T/2}|x(t)|^2dt = \int_{-\infty}^{\infty}\frac{1}{T}|X_T(f)|^2df \]

where the quantity, \(\frac{1}{T}|X_T(f)|^2\) can be interpreted as distribution of power in the frequency domain

For each \(f\) this quantity is a random variable, since it is a function of the random process \(x(t)\)

The power spectral density (PSD) \(S_x(f )\) is defined as the limit of the expectation of the expression above, for large \(T\): \[ S_x(f) = \lim _{T\to \infty}\mathrm{E}\left[ \frac{1}{T}|X_T(f)|^2 \right] \]

The Wiener-Khinchin theorem ensures that for well-behaved wide-sense stationary processes the limit exists and is equal to the Fourier transform of the autocorrelation \[\begin{align} S_x(f) &= \int_{-\infty}^{+\infty}R_x(\tau)e^{-j2\pi f \tau}d\tau \\ R_x(\tau) &= \int_{-\infty}^{+\infty}S_x(f)e^{j2\pi f \tau}df \end{align}\]

Note: \(S_x(f)\) in Hz and inverse Fourier Transform in Hz (\(\frac{1}{2\pi}d\omega = df\))

image-20240910003805151

[https://www.robots.ox.ac.uk/~dwm/Courses/2TF_2011/2TF-L5.pdf]


Example

image-20240904203802604

Remember: impulse scaling

image-20240718210137344 \[ \cos(2\pi f_0t) \overset{\mathcal{F}}{\longrightarrow} \frac{1}{2}[\delta(f -f_0)+\delta(f+f_0)] \]

Energy Signal

image-20240910004411501

image-20240910004421791

image-20240910004448439

reference

L.W. Couch, Digital and Analog Communication Systems, 8th Edition, 2013

Alan V Oppenheim, George C. Verghese, Signals, Systems and Inference, 1st edition

R. Ziemer and W. Tranter, Principles of Communications, Seventh Edition, 2013
