It’s a long way to Monte Carlo, it’s a long way to go…

How far is it to Monte Carlo from here?

Lately, there have been some questions about Monte Carlo analyses and how many runs we actually have to run. I thought that deserved some discussion.
(Thanks to the Siliconeer for some references and numbers).

The problem

The problem is this: you have an electronic circuit and you want to know if it works. So you simulate first, then you tape out, then you get the circuit back from the fab. Then you start to measure, and it might not work as you expected. Either it fails completely, due to some swapped wires or what have you, or there are softer errors, like mismatch creating skews or gain errors. That's an old story we all recognize.

Typically, you would verify as much as possible before submitting your expensive design. You run all the corners, all the use cases, all the temperatures, all the statistical variations, etc.

But wait… there is the keyword: all. How large is the number “all”?

That is a rhetorical question, and we all (pun intended) kind of know that we cannot run all test cases before taping out. It is simply not possible time-wise. We would miss the famous market window, or Ph.D. window, or what-have-you window.

Unfortunately, we will see that there is no direct answer, unless we make quite a few assumptions or have a priori knowledge about e.g. standard deviations. Otherwise, it has to be an iterative process.

Statistical analysis

What you do instead is run your statistical analyses on a set of simulation results and deduce the performance from that. For example, you would run, say, 100 transient simulations of your analog-to-digital converter (ADC) and find an effective number of bits (ENOB) of, say, 6.7. How reliable is this number? Is it 6.7-ish? Or between 6.5 and 6.9, or? In each run a random error, or deviation, or mismatch, or variation, is added that changes the outcome slightly. The results are then compiled in a histogram or similar.

[Figure: histogram of the simulated ENOB results]

That is where the confidence interval comes into play. We have to express the result in terms of how likely it is that the true value lies between this and that. We should say:

It is 99% likely that the true ENOB is between 6.5 and 6.9.

Which means that there is still a small probability that the true ENOB is outside that range. How do these ranges and values depend on the number of samples in the Monte Carlo analysis?

Crunch some numbers

In some sense, the answer is "the square root of n". The more samples you take, the more accurate the prediction becomes, but the accuracy only improves with the square root of n. If you need to go from 1-% accuracy to 0.1-% accuracy, you have to increase the number of samples by a factor of (approximately) 100.

Let’s see if we can make this statement just a bit more useful as we are more interested in what to write in the Cadence GXL form…

Let us refer to x as the parameter we are looking for in our simulations. We will use m and s for the measured mean and standard deviation, and M and S for the true mean and true standard deviation, respectively. The m and s we calculate with standard methods: m is the sum of all samples divided by the number of samples, and s is the square root of the average squared deviation from m.

m = \frac{1}{n} \sum_{i=1}^{n} x_i
s = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (x_i - m)^2 }
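As a small illustration (a minimal numpy sketch, not from the original post; the vector x here is placeholder data standing in for real simulator output), the two estimates can be computed directly from the Monte Carlo results. Note that the formula above uses 1/n, while many tools use the 1/(n-1) sample variant:

import numpy as np

x = np.random.normal(6.7, 0.1, size=100)  # placeholder "ENOB" results; replace with simulator output
n = len(x)
m = np.sum(x) / n                          # measured mean, m
s = np.sqrt(np.sum((x - m)**2) / n)        # measured standard deviation, s (1/n, as in the formula above)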

Now, the following holds if we can assume that our x samples have a Gaussian distribution (not really guaranteed for low sample counts, and it could also depend on the parameter we measure; it might not end up Gaussian as such. For example, the ENOB can never be larger than the nominal number of bits, which implies a skew in the distribution).

m - k_c \cdot s < M < m + k_c \cdot s

where k_c is a confidence parameter that depends on the number of samples and the chosen confidence level. For example, if we want to state 99-% confidence, the k_c value will be larger than for, e.g., 95%: k_{99} > k_{95}. Also, with more Monte Carlo runs, i.e., higher n, it is natural that k_c decreases.
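The post does not spell out how k_c is computed; a common choice (an assumption here, not necessarily what was used for the curves below) is the Student-t interval for the mean, where the t quantile divided by the square root of n plays the role of k_c:

import numpy as np
from scipy import stats

def k_c(n, confidence=0.99):
    # Two-sided Student-t interval for the mean: M lies in m +/- k_c * s,
    # with k_c = t_{(1+c)/2, n-1} / sqrt(n). One common convention, assumed here.
    return stats.t.ppf((1 + confidence) / 2, n - 1) / np.sqrt(n)

print(k_c(100))   # roughly 0.26 at 99-% confidence with 100 samples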

Similarly, we might also have to estimate the true spread, i.e., standard deviation. The following is true:

k_1 \cdot s < S < s \cdot k_2

The standard deviation has to be bounded by two parameters (also dependent on the confidence level and the number of samples) times the measured standard deviation. This is due to the nature of the standard deviation: a sum of squared Gaussian samples follows a chi-square distribution. These k_j values will also be functions of the number of samples; the more samples we take, the more we narrow it down. For example, for n = 300 samples, these values are k_1 = 0.926 and k_2 = 1.087.
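Again as an assumption about where such numbers come from (the post does not show the recipe), the classic chi-square interval for the standard deviation gives k_1 and k_2 as below; with n = 300 and a 95-% level this particular formula lands close to the values quoted above:

import numpy as np
from scipy import stats

def k_bounds(n, confidence=0.95):
    # Chi-square interval for the standard deviation: k_1 * s < S < s * k_2,
    # assuming s is the usual sample standard deviation (1/(n-1) convention).
    chi2_hi = stats.chi2.ppf((1 + confidence) / 2, n - 1)
    chi2_lo = stats.chi2.ppf((1 - confidence) / 2, n - 1)
    return np.sqrt((n - 1) / chi2_hi), np.sqrt((n - 1) / chi2_lo)

print(k_bounds(300))   # approximately (0.93, 1.09)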

Get to the point

So, what does this tell us? It says that, after millions and millions (an infinite number) of devices have been fabricated, we will eventually end up with a Gaussian distribution:

N(M, S)

where we, in advance, do not know more about M and S than what is stated above, and the degree of uncertainty is given by the number of samples we run and the confidence we choose. This means that the expected distribution could actually be the following (!) in the worst case (within our level of confidence):

N(m - k_c \cdot s, \; k_2 \cdot s)

This is shown graphically below: the real Gaussian might thus move a bit away from the measured mean as well as become wider (and it could of course also be shifted to the other side).

[Figure: worst-case Gaussian, shifted away from the measured mean and widened]

For the sake of clarity, we can plot these coefficients as a function of the number of samples. The picture below shows the curves for an increasing number of samples; to get the scales in place, the k_1 and k_2 coefficients were centered around 1. The curves are for 99-% confidence.

[Figure: confidence coefficients k_c, k_1 and k_2 versus number of samples, 99-% confidence]
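To reproduce curves of this kind (a sketch under the same Student-t and chi-square assumptions as above; the recipe behind the original figure may differ), one can sweep the number of samples and plot the three coefficients:

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

c = 0.99                                  # confidence level
ns = np.arange(10, 1000, 10)              # number of Monte Carlo samples
kc = stats.t.ppf((1 + c) / 2, ns - 1) / np.sqrt(ns)
k1 = np.sqrt((ns - 1) / stats.chi2.ppf((1 + c) / 2, ns - 1))
k2 = np.sqrt((ns - 1) / stats.chi2.ppf((1 - c) / 2, ns - 1))

plt.plot(ns, kc, label='k_c')
plt.plot(ns, k1, label='k_1')
plt.plot(ns, k2, label='k_2')
plt.xlabel('number of samples, n')
plt.legend()
plt.show()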

Yield

And that is not the whole story either. We quite often talk about yield in our circuits: we want to guarantee that a certain percentage of our circuits are within specification, for example p = 99%. Depending on that measure of quality, the Gaussian blobs have a significant impact too. See the picture below, and assume that the minimum and maximum parameters should be estimated within both the confidence and the yield requirements. We have to consider the worst-case Gaussians in both directions, i.e., the Gaussians with the widest standard deviation and the minimum and maximum mean values within the chosen degree of confidence. The pink areas are samples that will be discarded since they do not meet the specification. The limits can be calculated with the inverse error function (erfinv or similar). For example, a yield of 99% would put x_min and x_max at a distance of roughly 2 sigma away from the mean value, and a yield of 68% would put them 1 sigma away.

[Figure: worst-case Gaussians against the x_min and x_max specification limits; the pink areas mark discarded samples]

How do we work out some useful numbers from this? We have three components in our analysis and design method: the number of samples (n), the yield (p), and the confidence. Confidence and yield will probably be set by your project manager or product manager; there is not much you can do about them as a designer. However, you should be able to motivate the number of samples before you press the big, red tape-out button, given the minimum and maximum specifications on the parameter, x_min and x_max.

Let us normalize the two distributions. We then get the following requirement for the upper specification limit (the lower limit is analogous by symmetry, and we avoid any discussion about not being in the right region, assume an equal amount of failed chips on each side, etc.). For simplicity, assume that the specification is symmetric around a point x_0 at "distances" of \delta x, e.g., x_{max} = x_0 + \delta x. After dividing both sides by x_0 (assuming it is not zero), we get something like this:

1 + \frac{\delta x}{x_0} - \frac{m}{x_0} > \left( \text{erfinv}\left( \frac{1+p}{2} \right) \cdot k_2 + k_c \right) \cdot \frac{s}{x_0}

Let us now also assume that the specification extreme points are 10% away from x_0, i.e., \delta x / x_0 = 0.1. We get something like

\frac{1.1 - m/x_0}{s/x_0} > \Phi(n, c, p)

or

m + \Phi(n, c, p) \cdot s < 1.1 \, x_0 = x_{max}

if we assume symmetry. Here, \Phi(n, c, p) = \text{erfinv}\left( \frac{1+p}{2} \right) \cdot k_2 + k_c simply collects the confidence- and yield-dependent factors from the inequality above. If we plot \Phi(n, c, p) as a function of the number of samples, we get the following graph for 99% confidence and 99% yield. It converges to approximately 2, due to the yield requirement. For a lower number of samples, we have to apply a stricter scaling.
[Figure: \Phi(n, c, p) versus number of samples, for 99-% confidence and 99-% yield]
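Reading the inequality above literally, the check before tape-out becomes a couple of lines of code. The k_c and k_2 below come from the same assumed Student-t and chi-square intervals as in the earlier sketches, and the numbers for m, s and x_max are purely hypothetical:

import numpy as np
from scipy import stats
from scipy.special import erfinv

def phi(n, c=0.99, p=0.99):
    # Phi(n, c, p) as read off from the inequality above:
    # the erfinv term handles the yield, k_2 and k_c the confidence.
    kc = stats.t.ppf((1 + c) / 2, n - 1) / np.sqrt(n)
    k2 = np.sqrt((n - 1) / stats.chi2.ppf((1 - c) / 2, n - 1))
    return erfinv((1 + p) / 2) * k2 + kc

m, s, x_max, n = 6.7, 0.1, 7.0, 300      # hypothetical measured values and spec limit
print(phi(n))                            # tends towards roughly 2 for large n
print(m + phi(n) * s < x_max)            # True: the upper spec is met within confidence and yield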

“Concluding”

Given the formulas above, there is no direct answer for the number of samples you can or should run. It depends on the measured mean value, the measured standard deviation, your specification, and the yield requirements. Assuming 99-% yield, 99-% confidence, and 10-% symmetric specification points, the graph above shows the coefficient with which the measured standard deviation has to be scaled in order to evaluate whether enough samples have been taken.

Way forward

A more accurate approach would be to simply run a hypothesis test and see how likely it is that x lies between x_min and x_max, and how that varies with the number of samples, n. I might return to that one day when I have refreshed my statistics knowledge. We want to be able to state, with c-% confidence, that the following is true:

P(x_{min} < x < x_{max}) > p

where p is the yield. What would be the required number of samples to make that prediction?

Further, given the results above, it would make sense to have an adaptive Monte Carlo analysis: run a few samples first, check the results, and then continue running until the requirement above is met.
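As a rough sketch of such an adaptive flow (run_monte_carlo_batch is a hypothetical stand-in for whatever launches the simulator, and only the upper specification limit is checked; the stopping rule reuses the \Phi criterion from above):

import numpy as np
from scipy import stats
from scipy.special import erfinv

def phi(n, c=0.99, p=0.99):
    kc = stats.t.ppf((1 + c) / 2, n - 1) / np.sqrt(n)
    k2 = np.sqrt((n - 1) / stats.chi2.ppf((1 - c) / 2, n - 1))
    return erfinv((1 + p) / 2) * k2 + kc

def run_monte_carlo_batch(batch_size):
    # Hypothetical stand-in for the actual simulator interface.
    return np.random.normal(6.7, 0.1, size=batch_size)

x_max, batch = 7.0, 50
results = np.array([])
while True:
    results = np.append(results, run_monte_carlo_batch(batch))
    n, m, s = len(results), results.mean(), results.std(ddof=1)
    if m + phi(n) * s < x_max:           # requirement met within confidence and yield
        print(f"done after {n} runs: m = {m:.3f}, s = {s:.3f}")
        break
    if n >= 2000:                        # give up; the margin is probably too small
        print("not conclusive after 2000 runs, rethink the design or the spec")
        break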

And of course we have to assume that our models are adding the variations in a sufficiently random and correct way. But that’s another story…
