Note on the Power-Speed-Resolution trade-off for a DAC

With this post, I just want to compile notes for a future lecture on digital-to-analog converters and the trade-off between speed, resolution, and power consumption. What does this trade-off look like in its simplest form?

DAC configuration

You have a digital-to-analog converter (DAC) with $N$ bits of resolution. The sample (or perhaps more correctly, update) frequency is $f_{s} = f_{sample}$. The (maximum) output voltage swing is 0 to $V_{ref}$ V. It is a current-steering DAC dumping its output current into a load resistance $R_L$, typically some 50 Ohm or a derivative thereof.

Noise

Say that we want to produce a Nyquist converter and guarantee a bandwidth equal to the Nyquist range. This implies that the (-3-dB) bandwidth is half the sample frequency: $f_{bw} = f_{sample} / 2$

Now, given the load, the total thermal noise power (over 1 Ohm) from the resistive load is typically given by $v_n^2 = \frac{kT}{C_L}$

noise, with $C_L$ being the total load capacitance. ( $kT \approx 4 \cdot 10^{-21}$ at room temperature.) The total noise power dissipated in the resistive load is $P_{n} = \frac{kT}{R_L C_L}$

(Yes, the resistance is there again now – don't always reach for that kT-over-C expression; it depends on what you want to compare with…) where, due to the requirements stated above, $\frac{1}{R_L C_L} = 2 \pi f_{bw}$

which implies the simpler, and arguably more straightforward, expression for the total noise power: $P_{n} = {kT}\cdot \pi f_s$

(Notice the single-sided integration of the noise density.)
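As a quick sanity check of $P_n = kT \cdot \pi f_s$, a few lines of Python give a feeling for the magnitudes involved. (The 1-GHz update rate below is just an illustrative value, not one from the derivation.)

```python
import math

kT = 4e-21  # J at room temperature, as used above

def thermal_noise_power(f_s):
    """Total thermal noise power in the load, P_n = kT * pi * f_s (W)."""
    return kT * math.pi * f_s

# Example: a 1-GHz update rate (illustrative value)
P_n = thermal_noise_power(1e9)
print(f"P_n = {P_n:.3g} W = {10 * math.log10(P_n / 1e-3):.1f} dBm")
# -> roughly 1.26e-11 W, i.e., about -79 dBm
```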

Signal vs Noise

The maximum signal power that can be delivered to the output is obtained with a full-scale sinusoid. The power becomes $P_{s} = \frac {V_{ref}^2 }{ 8 R_L }$

Further on, a break-point in terms of noise occurs when the thermal noise power equals the quantization noise power: $P_{n} = {kT}\cdot \pi f_s = P_{q} = \frac{\Delta^2}{12 \cdot R_L } = \frac{ V_{ref}^2 } {12 \cdot 2^{2 N} \cdot R_L }$

Combining the equations above, we land at ${kT}\cdot \pi f_s = \frac{ 8 P_s } {12 \cdot 2^{2 N} }$ and thus $P_s = {1.5 \pi kT} \cdot f_s \cdot 2^{2 N}$

where we can see that $R_L$ disappeared again, phew!
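The cancellation of $R_L$ can be verified numerically: starting from the break-point signal power, back out $V_{ref}$ and check that the quantization noise indeed equals the thermal noise. (The values $N = 12$ and $f_s = 100$ MHz below are arbitrary example choices.)

```python
import math

kT = 4e-21
R_L = 50.0  # Ohm, as in the post; cancels out in the end

def breakpoint_signal_power(N, f_s):
    """Signal power at which thermal noise equals quantization noise (W)."""
    return 1.5 * math.pi * kT * 2**(2 * N) * f_s

# Verify the equality for an arbitrary example (N = 12 bits, f_s = 100 MHz)
N, f_s = 12, 100e6
P_s = breakpoint_signal_power(N, f_s)
V_ref = math.sqrt(8 * P_s * R_L)          # from P_s = V_ref^2 / (8 R_L)
P_q = V_ref**2 / (12 * 2**(2 * N) * R_L)  # quantization noise power
P_th = kT * math.pi * f_s                 # thermal noise power
print(P_q, P_th)  # the two agree; R_L has cancelled
```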

Compiling

Notice now that we have a relationship between power, speed, and resolution – one of those FOMs we have seen quite often before: the faster we operate, the less resolution we get for a given signal power. By boosting the signal power (i.e., the power consumption) we can move that boundary. $P_s \approx 2 \cdot 10^{-20} \cdot f_s \cdot 2^{2 N}$

and if we like, we can express this on a logarithmic scale: $10 \log_{10} P_s \approx 10 \log_{10} (2 \cdot 10^{-20}) + 10 \log_{10} f_s + 10 \log_{10} 2^{2 N}$ dB, i.e., $10 \log_{10} P_s \approx -197 + 10 \log_{10} f_s + 6 N$ dB

Or, expressed in dBm (0 dBm = 1 mW): $10 \log_{10} (P_s / 1\,\mathrm{mW}) \approx -167 + 10 \log_{10} f_s + 6 N$ dBm
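The logarithmic approximation can be cross-checked against the exact expression; a small sketch (assuming $kT = 4 \cdot 10^{-21}$ J as above):

```python
import math

kT = 4e-21

def Ps_dBm_exact(N, f_s):
    """Exact break-point signal power, P_s = 1.5*pi*kT*2^(2N)*f_s, in dBm."""
    P_s = 1.5 * math.pi * kT * 2**(2 * N) * f_s  # W
    return 10 * math.log10(P_s / 1e-3)

def Ps_dBm_approx(N, f_s):
    """Approximation: -167 + 10*log10(f_s) + 6*N dBm."""
    return -167 + 10 * math.log10(f_s) + 6 * N

# The two agree to within rounding (example: N = 16 bits, f_s = 10 MHz)
print(Ps_dBm_exact(16, 10e6))   # ~ -0.9 dBm
print(Ps_dBm_approx(16, 10e6))  # -1.0 dBm
```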

[Caveat: possibly a factor two missing…]

Trying to illustrate this properly would require some cleverer graphs. Instead I have chosen to use a spreadsheet [click the picture for better resolution], where I have highlighted (red-ish) the cells that require more than 0 dBm of output power to meet the frequency-resolution requirement. For example, a 10-MHz DAC with 16-bit resolution would be on the safe side of 0 dBm; 17 bits would not. Other color schemes could be added to compare with e.g. WLAN, GSM, GPS, etc., to get a feeling for the levels involved.
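The spreadsheet's highlighting rule is easy to reproduce; a minimal text-mode version of the same table, marking with "X" the $(f_s, N)$ cells that need more than 0 dBm (the grid of rates and resolutions is my own choice):

```python
import math

def Ps_dBm(N, f_s):
    """Approximate required full-scale signal power in dBm."""
    return -167 + 10 * math.log10(f_s) + 6 * N

# Grid of sample rates and resolutions; mark cells above 0 dBm with "X"
rates = [1e6, 10e6, 100e6, 1e9, 10e9]
print("f_s \\ N " + "".join(f"{N:>6}" for N in range(8, 21, 2)))
for f_s in rates:
    row = "".join("   X  " if Ps_dBm(N, f_s) > 0 else "   .  "
                  for N in range(8, 21, 2))
    print(f"{f_s:8.0e}{row}")
```

Consistent with the example in the text, the 10-MHz row flips from "safe" to "X" between 16 and 17 bits.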

Simple maths, but large consequences. So, check your specification…

3 thoughts on "Note on the Power-Speed-Resolution trade-off for a DAC"

1. Is the implication that it is pointless to have more bits in a DAC, at a given sampling rate and maximum output power, than the limits derived here allow?

What if I want to use my 10 GSPS, 16-bit, 0 dBm DAC to generate a weak narrow-band (say 1 MHz BW) signal around 3 GHz, and I use the four LSBs of the DAC for this? The SNR will be negative (in dB) over the 5-GHz Nyquist bandwidth, but positive when receiving the signal with a radio receiver of 1 MHz bandwidth, so it is not wholly pointless to use the bits that drown in thermal noise when working in a narrow band.
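The commenter's numbers can be checked against the post's thermal-noise expression. A rough sketch, assuming the 0 dBm full-scale figure, $kT \approx 4 \cdot 10^{-21}$ J, and a 4-LSB signal at $2^{-12}$ of full scale:

```python
import math

kT = 4e-21

f_s = 10e9          # 10 GSPS
P_full_dBm = 0.0    # full-scale output power (the commenter's figure)

# Signal occupying the four LSBs of 16 bits: amplitude 2^-12 of full scale
P_sig_dBm = P_full_dBm + 20 * math.log10(2**-12)

# Thermal noise over the full Nyquist band (P_n = kT * pi * f_s)
P_n_dBm = 10 * math.log10(kT * math.pi * f_s / 1e-3)

snr_nyquist = P_sig_dBm - P_n_dBm
snr_1MHz = P_sig_dBm - (P_n_dBm + 10 * math.log10(1e6 / (f_s / 2)))
print(f"SNR over 5 GHz: {snr_nyquist:.1f} dB")  # negative, as claimed
print(f"SNR in 1 MHz:   {snr_1MHz:.1f} dB")     # positive, as claimed
```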

Sure, a trick that would perhaps work as well is to have a DAC with a less frivolous number of bits and add a suitable digital dithering signal to the digital version of the weak desired signal (that of itself falls below the LSB). The truncation (quantization) to the number of bits of the DAC will add white noise to the spectrum, but the weak narrow band signal will still be there and it will be discernible by a narrow band receiver.

Is it always better to use the latter method than to have more bits in the DAC?

2. I guess it is pointless to a certain extent…

The problem with the equations above is that they tie the three main practical contributors to the bound (from the DAC's perspective: maintaining N bits over Nyquist, as promised by the datasheet) by making a couple of assumptions:

* integrating noise over the whole Nyquist band (assuming the receiver would capture the whole band)
* assuming quantization noise power equal to thermal noise power (which in terms of SNR would actually already mean a loss of 3 dB – half a bit.)

For the communication scenario, with an ideal receiver, it would be more appropriate to consider the theoretical limit using e.g. the channel-capacity theorem. An SNR gain (dependent on bandwidth, BER, etc.) should be added to the equations. We would then be able to use the four LSBs in a narrow band, hidden in the overall Nyquist noise (but visible in the narrow band), and let the ideal correlator do its job.
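For reference, the channel-capacity limit alluded to here is the Shannon-Hartley theorem, $C = B \log_2(1 + \mathrm{SNR})$; a small sketch with illustrative numbers (the 1-MHz bandwidth and 30-dB SNR are example values, not from the discussion):

```python
import math

def shannon_capacity(B, snr_dB):
    """Shannon-Hartley channel capacity in bit/s for bandwidth B (Hz)."""
    return B * math.log2(1 + 10**(snr_dB / 10))

# Illustrative: a 1-MHz channel at a modest positive narrow-band SNR
print(f"{shannon_capacity(1e6, 30):.2e} bit/s")  # ~ 1e7 bit/s
```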

I would think, without digging into those formulas, that it would "weaken" the impact of the variable N (number of bits) in the equations, and the two cases would end up rather similar, since the target would no longer be to "guarantee N-bit resolution over Nyquist".
