Another bound on power consumption in DACs

I wanted to go back to the previous post where we investigated the trade-offs between speed, resolution and power consumption in a digital-to-analog converter (DAC). That gave us a bound in that triangle.

What if we use another starting point in our argument rather than noise?

Same type of DAC

Consider the DAC in the figure. Same type of DAC as last time. We have a current through a resistor forming the voltage. What is the minimum possible quantum of charge through the resistor? Leading question … Assume the least significant bit is determined by one single electron during the sampling period.
The average current for a single electron is given by

I = q_0 / \Delta T = q_0 \cdot f_s

where f_s is the sample frequency and q_0 = 0.1602 aC is the elementary charge. (Let us ignore quantum effects and those things. Let us be traditional and assume we know where the electron is before we open the box with the cat …)

Small value?

So, is this a small or a large value?
Well, assume a resistive load of R_L. The (average) voltage for that particular charge, corresponding to one least significant bit, would be

\Delta V = R_L \cdot I = R_L \cdot q_0 \cdot f_s

Assume 100 Ohms in the load, and a sample frequency of 600 MHz. We get

\Delta V = 100 \cdot 0.1602 \cdot 10^{-18} \cdot 600 \cdot 10^{6} \approx 10^{-8}\ \text{V}

which is 10 nV. This would correspond to something like:

  • In a 16-bit converter, this would mean that the peak voltage is some 655 uV.
  • In a 20-bit converter, sampled at 1.2 GHz, the peak voltage is some 21 mV.

The voltages simply cannot be lower than that for the given sample frequencies and resolutions. Otherwise we would have to split the electron (buying the Swedes a cake).
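The numbers above are easy to reproduce. A minimal sketch (the 100-Ohm load and the 600-MHz and 1.2-GHz sample frequencies are the post's example values):

```python
q0 = 1.602e-19  # elementary charge in coulombs

def lsb_voltage(R_L, f_s):
    """Average LSB voltage when a single electron defines the LSB per sample period."""
    return R_L * q0 * f_s

dv = lsb_voltage(100, 600e6)
print(dv)                                # ~9.6e-9 V, i.e., roughly 10 nV
print(dv * 2**16)                        # ~0.63 mV peak for a 16-bit converter
print(lsb_voltage(100, 1.2e9) * 2**20)   # ~20 mV peak for a 20-bit converter at 1.2 GHz
```

(The post's 655 uV comes from rounding the LSB up to 10 nV before multiplying by 2^16; the unrounded value lands around 630 uV.)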

I think it is actually rather interesting: the faster you sample, the fewer electrons there will be for each least significant bit (LSB).

Full-swing signal

With a full-scale sinusoid signal in place, we can find the average power as

P = \frac{V_{ref}^2}{8 R_L} = 2^{2 N - 3} \cdot \frac{q_0^2}{\Delta T^2} \cdot R_L = R_L \cdot 2^{2 N - 3} \cdot q_0^2 \cdot f_s^2

which is then the absolute minimum possible power that must be consumed to obtain a certain resolution.
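Plugging in the post's earlier example values (16 bits, 600 MHz, 100 Ohms — combining them here is my assumption) gives a feel for the size of this bound:

```python
q0 = 1.602e-19          # elementary charge in coulombs
N, f_s, R_L = 16, 600e6, 100.0

# Minimum average power for a full-scale sinusoid when one electron defines the LSB
P_min = R_L * 2**(2 * N - 3) * q0**2 * f_s**2
print(P_min)            # ~5e-10 W, i.e., about half a nanowatt
```

A sub-nanowatt figure, so this single-electron bound is far below what practical converters dissipate.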

Are we having fun yet?

Just for the fun of it, let us rewrite the formula a bit

P = C_L R_L \cdot 2^{2 N - 3} \cdot \frac{q_0}{kT} \cdot \frac{kT}{C_L} \cdot q_0 \cdot f_s^2 = 2^{2 N - 3} \cdot \frac{q_0}{kT} \cdot \frac{kT}{C_L} \cdot \frac{q_0 \cdot f_s}{\pi}

where we have assumed that we also have to guarantee the bandwidth, not just the sample frequency and voltage levels: a bandwidth of f_s / 2 requires R_L C_L = 1 / (\pi f_s), which is what turns the R_L C_L f_s^2 factor into f_s / \pi.
In the equation, we can identify the inverse of the 26-mV thermal voltage (q_0 / kT), the thermal noise power kT / C_L, and some constants. Possibly, it could be related to the previous post. Philosophically (?) one could also wonder what the thermal noise looks like when we push single electrons back and forth.
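The two forms of the power expression are algebraically identical once the bandwidth condition R_L C_L = 1 / (pi * f_s) is imposed (the q_0/kT and kT/C_L factors cancel back to q_0/C_L). A quick numerical check, with an assumed 1-pF load capacitance:

```python
import math

q0 = 1.602e-19                         # elementary charge in coulombs
N, f_s, C_L = 16, 600e6, 1e-12         # C_L = 1 pF is an assumed example value
R_L = 1 / (math.pi * f_s * C_L)        # bandwidth condition: R_L * C_L = 1/(pi*f_s)

P_direct    = R_L * 2**(2 * N - 3) * q0**2 * f_s**2
P_rewritten = 2**(2 * N - 3) * q0**2 * f_s / (math.pi * C_L)

assert math.isclose(P_direct, P_rewritten)
```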


How fast do we need to sample over a 100-Ohm load to get a 1-V drop with a single electron? (Once again, from a mathematical point of view, and possibly not the correct physical description of the scenario).

V = R_L \cdot I = R_L \cdot q_0 \cdot f_s = 1

f_s = 1 / (R_L \cdot q_0) \approx 62 \cdot 10^{15}\ \text{Hz}

OK, so 62 PHz is rather fast … if we would correlate with light, it would end up in the extreme-UV domain, with a wavelength roughly 100 times shorter than visible light. And now we are kind of entering photoelectric-effect territory, sort of …
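The frequency and the comparison with light can be checked in a couple of lines:

```python
q0 = 1.602e-19            # elementary charge in coulombs
R_L = 100.0               # load resistance in ohms
c = 3e8                   # speed of light in m/s (rounded)

f_s = 1 / (R_L * q0)      # ~6.2e16 Hz, i.e., ~62 PHz
wavelength = c / f_s      # ~5 nm, roughly 100x shorter than visible light (~500 nm)
print(f_s, wavelength)
```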


5 thoughts on “Another bound on power consumption in DACs”

  1. Well, does current in solid state matter really work that way? Is it not better viewed as the displacement of charge carriers than the instantaneous movement of individual charges through the whole circuit? With the displacement model I think it makes sense to measure the effect (voltage) of fractional elementary charges moving through a resistor in a given time.

    Think of it as a sequence of back-to-back marbles being pushed through a rubber tube which is just a little bit smaller than the diameter of the marbles. The tube will hinder (resistance) the marbles (charge carriers) and a certain pressure (voltage) will be required to push the marbles through. If we push with a constant force (constant voltage) and continuously and smoothly add marbles at the input of the rubber tube, the marbles will flow smoothly (constant current) through the tube (resistor), regardless of how rapidly we measure the force being applied (voltage) to push the marbles through the tube.

    The marbles do not quantum jump one at a time through the rubber tube and neither do I think the electrons do through a resistor.

    In reality, the electrons move quickly and randomly in all directions in a conductor at room temperature and any reasonable current we push through it gives just a very tiny bias in the movement in one direction. But I still think the marble analogy holds true and thus that the voltage caused by the displacement of a fraction of an elementary charge per unit time through a resistance in principle is a real phenomenon. In normal temperatures, such weak signals will probably always drown in thermal noise (unless the bandwidth is very small).

    • Yes, nice analogy. For the single-electron “dive” as shown in the picture (with no electrons in the actual resistor), it is quite hard to define a resistance as such. Given the required speed, the physical size of such a resistor would also be quite small in order to be able to assume a lumped element…

Actually, from the previous post (speed-resolution-power trade-off) I started thinking if there is a bound in the “other end” and started off at some switched-capacitor circuits where we would derive the discrete-time operation by setting up the relationships between the sequence of charge redistributions from capacitor to capacitor. In that case, we do consider q = C V and derive the V from the amount of charge(s) traveling to and from the plates.

Recent reports on unit capacitance in SAR-ADCs (with SC-DACs) use, e.g., 4-fF unit capacitance, and with a 1-V reference that implies something like 4 fC / 0.1602 aC ≈ 25,000 electrons (give or take). Still sort of countable.
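That electron count follows directly from q = C V, using the 4-fF and 1-V figures mentioned above:

```python
q0 = 1.602e-19             # elementary charge in coulombs
C_unit, V_ref = 4e-15, 1.0 # 4-fF unit capacitance, 1-V reference

electrons = C_unit * V_ref / q0
print(electrons)           # ~25,000 electrons on one unit capacitor
```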

      The constant flow of electrons (well, mean/average flow) is obviously much more complicated to handle.

      Still though, would there be a minimum quanta of the fraction of the quanta?

      • Well, let’s assume we cool our resistor down to 0 K. This gets rid of the thermal noise. Now we only have the voltage caused by our current (or is it the current caused by our voltage?). Can we have arbitrarily small currents and voltages in this scenario? I do not know, but perhaps Heisenberg is there to play tricks with us with his uncertainty principle? If something is moving and you measure its velocity very precisely, you cannot simultaneously know its location arbitrarily well. Maybe this means that there is a limit to how precisely one can know both the current (velocity) and voltage (charge position) at the same time? Does this mean that there is bound to be some minimum amount of noise in either the current and/or the voltage?

        I am skating on thin ice here as I know very little about quantum mechanics…

  2. If it is zero Kelvin it would be pretty thick ice I guess 😉

Some do put the Heisenberg limit in ADC graphs (e.g., page 14 in Walden’s (the Walden-FOM guy) presentation), where we see some 18-bit resolution at 10 GSps as a limit.

[[ This SNR-frequency point – assuming the arguable example in my picture – implies an LSB current of 38 nA for 1 V over 100 Ohms. With the “electron stage-dive” model we have i = dq/dt ⇒ q = i / f_s ≈ 3.81e-18 C, i.e., some 24 electrons per sample. ]]
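For what it is worth, that 24-electron figure checks out numerically (18 bits at 10 GSps, 1 V over 100 Ohms, as in the comment above):

```python
q0 = 1.602e-19                            # elementary charge in coulombs
V_fs, R_L, N, f_s = 1.0, 100.0, 18, 10e9  # Walden-limit example point

i_lsb = V_fs / R_L / 2**N                 # ~38 nA LSB current
q_per_sample = i_lsb / f_s                # ~3.8e-18 C per sample
print(q_per_sample / q0)                  # ~24 electrons per sample
```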

Yes, things do indeed become complicated in the nanocosmos…
