Can we trust the models?

I am preparing this year's version of the analog integrated circuit courses (TSTE08 and TSEI12). For this purpose, I need to tweak some of the model cards for the simulator. We do not need the fanciest processes to demonstrate analog circuit design in the courses.

However, while doing this I revisited an old post:

as a way (thanks, Aamir) to plot, e.g., the transconductance and output conductance of, say, a common-source circuit. That is quite powerful if you want to demonstrate the importance of choosing your operating region, or your operating point in general. With the above-mentioned post, we can plot the parameters (even the operating region!) as a function of, e.g., the input voltage.

I got a bit confused by the results at first, until I realized that I was invoking a level-1 model of the MOS in my testbench. Think Shichman-Hodges, hand calculations, if-statements, etc. There is so much material on the properties of the different models, and I do not intend to touch upon that here; instead I serve a small comparison…

Consider the testbench below. It is a common-source stage with an NMOS drive transistor and an active, current-mirror load, where we set the current with an ideal current source, i.e., forcing the current through the drive transistor. We then sweep the input transistor's DC voltage to find an operating point of interest (or at least get an idea of the operation).

schematic
I will now switch in different models in my model card, and I also append one of those magic extras in an additional file to be able to plot what I want:


save Mdrive:region
save Mdrive:gds
save Mdrive:gm

Level 1

Let us look at the results for the different models. First, we start with level 1, one of the most basic models, used for old technologies. We plot the gain (gm/gds), the output voltage (vOut), the transistor's operating region, the transconductance (gm), and the output conductance (gds) as a function of the input DC voltage, vInDc.

We can for example see how the transistor sweeps through the different operating regions (green, brickwall): cut-off (0) – subthreshold (3) – saturation (2) – linear (1). The transconductance (gm, yellow) follows a more or less linear curve as soon as we enter the saturation region. (Notice that gm = alpha x Veff according to the good old hand calculations.)
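As a reminder (my own note, using the textbook square-law expressions rather than anything taken from the model card), the level-1 relations in saturation are

\[ I_D = \frac{\mu C_{ox}}{2}\frac{W}{L}V_{eff}^2, \qquad g_m = \frac{\partial I_D}{\partial V_{GS}} = \mu C_{ox}\frac{W}{L}V_{eff} = \alpha V_{eff}, \qquad V_{eff} = V_{GS} - V_T, \]

which is why gm grows linearly with the input voltage once the transistor is in saturation.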

The oddity in this simulation is the gain (blue). Notice that it is plotted on a logarithmic scale (!) and for low input DC voltages the gain is huge, ~30000. We even see some divide-by-zero happening for low values, presumably because gds evaluates to zero there. Clearly this indicates something unrealistic with the model.

level1

Level 2

Let us see what level 2 can offer. This is the so-called Grove-Frohman model (Google them! Interesting guys.) Below we find a similar picture. In this case, the gain looks much more moderate. The threshold voltage is different for this level, which implies a shift of the regions towards higher voltages. We see a softer behavior in the transconductance, but still – at the shift from the subthreshold to the saturation region (around vInDc = 0.65 V) – we see a tendency towards a discontinuity. (Notice that we have a finer resolution in vInDc than illustrated by the tick marks in the graphs.)

level2

Level 3

Level 3 is more based on empirical results. Similarly here, we see a discontinuity around the shift from cut-off/subthreshold to the saturation region. Also on the gain curve we see a clear peak, indicating something strange; it would be more realistic to think of the transition as something continuous. Remember that the blue gain curve is still on a logarithmic scale: the peak hits some 40000 times of gain. The transconductance (yellow) is not as linear as in level 1.

level3

Level 49

Let's switch to level 49. This is a more modern model and is also called the BSIM3v3 (which comes in different flavours…). Once again the threshold voltage is different. The nice thing now is that we see a smooth transition between the operating regions – especially from the subthreshold to the saturation range. One would think that it is a more accurate model of the real-life transistor, but of course we cannot be entirely sure.

The gain seems more realistic, with no sharp spikes or jumps, and it settles towards a final value in a smooth fashion, both for low and high input voltages.

level49

Notice that the threshold voltages are not identical for the different levels, and neither are the mobilities, etc. As such, the models are not directly comparable and cannot be used here to make a true judgment of which model is the most correct. The same holds for my graphs, which have different scales, some with the zoom adjusted, etc. Just look for the tendencies.

Different transistor models also try to capture different physical phenomena, and I will leave it to you to do the research in all the books out there.

So, in short – check what you are simulating. Did you switch in the correct netlist and the correct parameters? Are there discontinuities in the curves? There probably shouldn't be any. Etc.

Noise in an RC integrator

Irfan and I had a discussion on noise in our receiver for body-coupled communication and the fact that noise is built up in the receiver chain.

Just for the sake of argument I calculated the noise in a simple RC integrator to illustrate how you have to select the lower-end frequency. Such a block has an infinite DC gain, which means that low-frequency components are dramatically amplified. The transfer characteristic is 1/(s R C), where s is your frequency variable.

The same holds for 1/f noise. If you want to find the total noise power of flicker noise, you have to integrate down to 0 Hz, which gives you an infinite value. People tend to find this problematic. However, the same actually holds for thermal noise: if you integrate all the way up to infinity, you also end up with infinite noise power. Both are of course models, and in reality there are other effects not captured by them, such as band limitation, etc.

So, just some scribbles with a few tricks associated with them, such that I can refer to this page later. And I hope I am not wrong now 🙂 please help me correct the equations if I am…

Let's start with an ideal opamp in a feedback configuration: a resistor connected between the input and the virtual ground, and a capacitor connected between the output and the virtual ground.

noise_rcint_1

The noise model of the resistor (thermal noise) is given by a voltage source describing a noisy process with a power spectral density (PSD) of 4 k T R, where k is the Boltzmann constant and T is the absolute temperature (in Kelvin). (Notice that it is a single-sided noise spectral density.) Using superposition (for linear circuits), we ground the input signal and just look at the contribution from the noise.

noise_rcint_2

According to the theory of stochastic processes, we can form the output PSD as the absolute-square of the transfer function (from noise to output) times the noise PSD. The transfer function from input to output is more or less known from above.

noise_rcint_3

Plugging in the values gives us a rather compact description of the output noise PSD. At this point it is described by a 1/f-squared characteristic.
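For reference, my reconstruction of this step (assuming the inverting RC integrator above; only the magnitude of the transfer function matters here):

\[ S_{out}(f) = \left| \frac{1}{j 2 \pi f R C} \right|^2 \cdot 4kTR = \frac{4kTR}{(2\pi f R C)^2} = \frac{kT}{\pi^2 R C^2 f^2}. \]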

noise_rcint_4

Let's integrate that in order to get some idea of the total output-referred noise power (the input-referred noise PSD is already known…). As per Parseval, we can find the noise power as the integral from some start frequency (which could go down to 0, but let's wait with that) up to infinity.

noise_rcint_5

By introducing the lower-end frequency f0 and the time constant (tau) of the RC integrator, we can further compact the description of the output noise power. We see now that the power is inversely proportional to the RC time constant, to the capacitance, and to the lower-end frequency.
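My reconstruction of this step, with tau = R C and the lower-end frequency f0:

\[ \overline{v_{out}^2} = \int_{f_0}^{\infty} \frac{kT}{\pi^2 R C^2 f^2}\, df = \frac{kT}{\pi^2 R C^2 f_0} = \frac{kT}{\pi^2\, \tau\, C\, f_0}. \]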

The lower-end frequency can be expressed as the inverse of the maximum expected time of operation (sort of). Assume you turn your device on for one day: the lowest changing frequency is then 1/86400 Hz.

noise_rcint_6

Now, some “tricks”. The kT factor is found here and there in our designs. At room temperature, it is approximately 4e-21. A handy number to remember. Further on, pi-squared is approximately 10. Eventually, we land at something rather compact again. The longer our device is on, the more noise power. The higher the capacitance, the less noise (which is something we also learned in day care).
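Putting those approximations together (kT ≈ 4e-21, pi² ≈ 10, and f0 = 1/T_on with T_on the on-time), my reading of the compact form is roughly

\[ \overline{v_{out}^2} \approx \frac{4\cdot 10^{-21}}{10\, \tau\, C\, f_0} = \frac{4\cdot 10^{-22}\, T_{on}}{\tau\, C}. \]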

noise_rcint_7

With some example values inserted: assume a time constant of 1 ms and a 1-day operating time; the output noise power can then be calculated with only the capacitance (or, equivalently, the resistance) left as a free parameter.

noise_rcint_8

With a 1-microfarad capacitor, the resistance becomes 1 kOhm and the output power is 0.35 microwatt.

noise_rcint_9

noise_rcint_10

What tones to select when testing your DAC?

This post touches upon an older post I did once (cannot find it right now).

In a sampled system, we will “suffer” from folded tones back and forth in our spectrum. For example, if we apply a 3-MHz sinusoid to a digital-to-analog converter (DAC) sampling at 10 MHz, we will get a tone at 3 MHz at the output too (luckily).

However, there might be distortion in the sampling process (notice that it has to be distortion in the components that are sampled/updated on a regular basis, i.e., at the sample frequency). If we have distortion of second and third order, we would expect to see a fundamental tone at 3 MHz and distortion terms at 6 and 9 MHz. Due to the folding principles (well, essentially Poisson's summation formula), we will also see the distortion terms coming in at 4 MHz (the second) and 1 MHz (the third).
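A quick way to check where the harmonics land is to fold them numerically. A small Octave sketch of my own (not from the original post):

fs   = 10e6;              % sample frequency
fsig = 3e6;               % signal frequency
h    = 2:3;               % harmonic orders of interest
fh   = mod(h*fsig, fs);   % fold into [0, fs)
fh   = min(fh, fs - fh)   % reflect into the first Nyquist band [0, fs/2]
% gives 4 MHz for the 2nd harmonic and 1 MHz for the 3rd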

This is not necessarily a problem, but if the tones are too close to each other, it might be difficult to isolate them properly with your test script, and if they are strong enough, they might hide some vital information with respect to the system you would like to evaluate (assume you are testing your DAC).

For example, assume you have a signal at 2.5 MHz and distortion up to the fifth order; then the fifth harmonic will land right on top of your fundamental. Or why not take the classical example of displaying a fully linear DAC in your test case: use a signal frequency at a quarter of the sample frequency. All distortion terms will end up at DC, at half the sample frequency (often omitted due to "clock feedthrough" or similar), or on top of the signal itself. Display the spectrum and you would have an amazingly linear converter…

Anyways, just for some inspiration. Below I show the results from simulating which tones are "acceptable" under the following conditions:

  • The number of bins in your FFT is 2048
  • The order of distortion is (up to) 9
  • The number of Poisson repetitions is 100 (take ten times the order of the distortion for the fun of it).
  • The distance between two terms cannot be less than the number of bins / 32 (a “fixed” frequency makes sense here rather than something depending on the signal frequency.)

The values that are 0 indicate a poorly chosen tone/bin for your signal. So, select one of the non-zero bins, apply your IFFT, and voilà – you have a well-behaved spectrum in which you can determine your SNDR/SFDR/THD a bit more easily.
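For reference, a minimal Octave sketch of how such a search could look. This is my own guess at the procedure, not the script behind the figure; the variable names and the exact acceptance criterion are assumptions, and the mod-based folding implicitly covers all spectral repetitions rather than using an explicit number of Poisson repetitions.

nbins   = 2048;                      % FFT size
order   = 9;                         % highest distortion order considered
mindist = nbins/32;                  % minimum allowed spacing between folded terms
good    = zeros(1, nbins/2);
for bin = 1:nbins/2-1
  % fold the harmonics of this candidate signal bin back into [0, nbins/2]
  hb = mod((1:order)*bin, nbins);
  hb = min(hb, nbins - hb);
  % accept the bin only if all folded terms are mutually well separated
  d  = abs(hb(:) - hb(:)');
  d(1:order+1:end) = inf;            % ignore the diagonal (self-distances)
  if min(d(:)) >= mindist
    good(bin) = bin;
  end
end
stem(good);                          % zeros mark poorly chosen bins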

goodtones

And I am only plotting the values up to half the sample frequency. Anything above would be folded (again…).

Silly script: Two-stage OP compensation

As a complement to one of my lectures on compensation of operational amplifiers, I wrote a crude MATLAB example for testing stability, with the ability to loop over, for example, currents, voltages, etc. In some sense, this could be done using Verilog-A blocks and slightly more accurate circuit descriptions. But still – this is quite a good way of understanding the operation of something as simple as a two-pole system.
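To set the scene, here is a tiny two-pole sanity check of my own (the numbers are made up and are not the ones used in the script below): with the non-dominant pole placed at 2.2 times the unity-gain frequency, and ignoring the zero, the two-pole calculation gives roughly 65 degrees of phase margin.

gm_I  = 1e-3;                      % assumed first-stage transconductance
C_C   = 0.44e-12;                  % assumed Miller compensation capacitor
w_ug  = gm_I/C_C;                  % unity-gain frequency of the compensated OP
p_2   = 2.2*w_ug;                  % non-dominant pole per the rule of thumb
phi_m = 180 - 90 - atan(w_ug/p_2)*180/pi   % approximately 65 degrees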

I'm referring to a standard two-stage amplifier with a differential pair in the first stage and a common-source stage at the output. Attached is also a plot from Octave illustrating the pole/zero placement.

Octave plot of phase and amplitude

And you get some results in raw format:

octave:1> antikPoleZero
f_ug = 3.0917e+08
phi_m = 54.465
p_1 = -1.1171e+05
p_2 = -3.2733e+09
p_3 = -7.7695e+12
z_1 = 2.2957e+10

So, just an example/suggestion of simple ways to get some more understanding of the operation of the circuit.


%
% Mainly, this "demo" concentrates on the classical two-stage
% amplifier. This implies also that we have not employed the
% suggested technique by J. Baker, et al.
%
% Some of the design rules:

% Miller

% z_1 approx 10*w_ug => gm_II = 10*gm_I
% p_2 approx 2.2*w_ug => C_C = 0.22 * C_II
% or more generically => C_C = 2.2*C_II/(gm_II/gm_I)

% Nulling resistor:

% And then (nulling resistor option 2, where z_1 -> inf):
% p_2 approx 1.73*w_ug => C_C = 1.73 * C_II / (gm_II/gm_I)
% p_3 > 10 * w_ug
% R_Z = 1/gm_II

% Setting up the frequencies
% No need to touch this
% =========================================================
N = 256;
f = logspace(1, 11, N);
w = 2*pi*f;
s = j*w;

% Some "process"-dependent parameters. Assuming a relatively
% strong channel-length modulation.
lambda = 0.05;

% =========================================================

% =========================================================
% Do your changes here. Why not a for loop on top? You can
% characterize the phase margin as function of tail current,
% or similar.
% =========================================================
% =========================================================
% =========================================================

% Going to a more circuit-level representation:
% Tail current through the differential pair.
I_0 = 100e-6;

% Effective input voltage on diff-pair transistors:
Veff_I = 0.2;

% Effective voltage of the second-stage drive transistor. For the sake
% of argument, it is slightly higher than in the first stage.
Veff_II = 0.4;

% The mirror ratio between the output stage and the input
% stage, i.e., the output stage drives K times more current
% than the differential pair.
% Notice that the gm_II = gm_I * K * Veff_I / Veff_II;
K = 20;

% Internal stage capacitance. For sake of modeling, we have
% assumed that the drive transistor in the second stage also
% scales with K (given a constant current and Veff_II).
% C_I is approximately the CGS of the drive transistor.
% Assuming some fF-cap on the gate:

C_I = K*10e-15;

% C_II is the load capacitance.
C_II = 2e-12;

% Some tentative compensation network to start with.
C_C = 0.22 * C_II;
R_Z = 1 ; % AND ALSO SEE BELOW!

% =========================================================
% =========================================================
% =========================================================

% We can now form the different parameters, etc.
% Given the parameters above, calculate the transfer function:
% No need to touch this.
% =============================================================

% First stage:
gm_I = 2*I_0 / Veff_I;
g_I = lambda*I_0;
% Second stage:
gm_II = 2*K*I_0/Veff_II;
g_II = lambda*K*I_0;

A_I = gm_I / g_I;
A_II = gm_II / g_II;

a = A_I * A_II;
b = (C_II + C_C)/g_II + (C_I+C_C)/g_I + a*C_C/gm_I + R_Z*C_C;
c = ((1/(g_I*g_II))*(C_I*C_II + C_C*C_I + C_C*C_II) + ...
R_Z*C_C*(C_I/g_I + C_II/g_II));

d = R_Z*C_I*C_II*C_C/g_I/g_II;

z_1 = 1/(C_C/gm_II - R_Z*C_C);

% =========================================================
% =========================================================
% =========================================================
R_Z = 1 /gm_II;
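% Note: b, c, d and z_1 above were already evaluated with the earlier R_Z,
% so re-run that block after changing R_Z here if it should take effect.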
% =========================================================
% =========================================================
% =========================================================

A_s = a * ( 1 - s /z_1) ./ ...
( 1 + b*s + c*s.^2 + d*s.^3);

% Derive the roots as:
p = roots([1 c/d b/d 1/d ]);

% Just for some pretty-printing.
p_1 = p(3);
p_2 = p(2);
p_3 = p(1);

% Amplitude and phase characteristics
log_A = 20*log10(abs(A_s));
ang_A = 180*unwrap(angle(A_s))/pi;

% Find the unity-gain frequency (in Hz) and phase margin

f_ug = mean(f(find(abs(log_A)==min(abs(log_A)))));
phi_m = mean(ang_A(find(abs(log_A)==min(abs(log_A)))))+180;

% the mean-thing is there to avoid some numerical issues.

fh = figure(1);
subplot(2,1,1);
sh(1) = semilogx(f, log_A);
hold on;
ph(1) = plot(abs(p_1/2/pi), 0, 'x');
ph(2) = plot(abs(p_2/2/pi), 0, 'x');
ph(3) = plot(abs(p_3/2/pi), 0, 'x');
ph(4) = plot(abs(z_1/2/pi), 0, 'o');
lh = line(f_ug*[1 1], [20*log10(abs(A_s(1))) -150]);
tl(1) = title('Amplitude characteristics');
tl(2) = ylabel('|A^2(j \omega )|');
hold off;

subplot(2,1,2);
sh(2) = semilogx(f, ang_A);
hold on;
ph(5) = plot(abs(p_1/2/pi), 0, 'x');
ph(6) = plot(abs(p_2/2/pi), 0, 'x');
ph(7) = plot(abs(p_3/2/pi), 0, 'x');
ph(8) = plot(abs(z_1/2/pi), 0, 'o');
lh = line(f_ug*[1 1], [0 -180]);
tl(3) = ylabel('arg{ A(j \omega )}');
tl(4) = xlabel('Frequency [Hz]');
hold off;

set(tl,'FontSize',18);
set(ph,'LineWidth',4);
set(sh,'LineWidth',4);

% =============================================================
% Dumping some data in raw format

f_ug
phi_m
p_1
p_2
p_3
z_1

% =============================================================

Two-stage OP and macro model
