Standard deviation is a statistical measure of the dispersion of a data series. It is usually calculated in two passes: first you find the mean, and second you compute the squared deviations of the values from the mean:

```
#include <math.h>

double std_dev1(double a[], int n) {
    if (n == 0)
        return 0.0;
    double sum = 0;
    for (int i = 0; i < n; ++i)
        sum += a[i];
    double mean = sum / n;
    double sq_diff_sum = 0;
    for (int i = 0; i < n; ++i) {
        double diff = a[i] - mean;
        sq_diff_sum += diff * diff;
    }
    double variance = sq_diff_sum / n;
    return sqrt(variance);
}
```

But you can do the same thing in a single pass: the variance equals the mean of the squares minus the square of the mean, so you can rewrite the function in the following way:

```
double std_dev2(double a[], int n) {
    if (n == 0)
        return 0.0;
    double sum = 0;
    double sq_sum = 0;
    for (int i = 0; i < n; ++i) {
        sum += a[i];
        sq_sum += a[i] * a[i];
    }
    double mean = sum / n;
    double variance = sq_sum / n - mean * mean;
    return sqrt(variance);
}
```

Unfortunately, the result can be badly inaccurate when the array contains large numbers (see the comments below).

This trick is mentioned in an old Soviet book about programmable calculators (here is the reference for Russian readers: Финк Л. Папа, мама, я и микрокалькулятор. — М.: Радио и связь, 1988).

## 14 comments


Rejoice!

I have just scanned the book in djvu format and uploaded it to

http://www.arbinada.com/pmk/node/310

It is a great little book. Have fun.

Regards,

Greg W.

http://www.cs.berkeley.edu/~mhoemmen/cs194/Tutorials/variance.pdf suggests a one-pass calculation that avoids many of the round-off errors (I haven't tried it though).

Thank you very much, Greg. IMHO, it's one of the best Russian books on programming.

James, thank you for the link. This method looks interesting but, if I understand correctly, it requires two divisions at every iteration, which is very slow. It probably makes more sense to use the two-pass calculation. Still, thanks for the info.

James, thank you, that's a really good point. Mark Hoemmen's paper has a perfect example to illustrate the danger: take just three numbers, 10000, 10001 and 10002, use floats, and you get the wrong result.

(It's probably no accident that M. Hoemmen is a PhD student at Berkeley, where Prof. W. Kahan is. And I wouldn't be surprised if James is there too. :) )

That simple example shows once again why Prof. W. Kahan designed the x87 FPU with the option of using 80 bits for intermediate results. But Microsoft intentionally designed its compilers to ignore that feature (and more).

Well, that's a good moment to visit Prof. Kahan's web page again

http://www.cs.berkeley.edu/~wkahan/

and also read, for example

Marketing versus Mathematics

http://www.cs.berkeley.edu/~wkahan/MktgMath.pdf

"Old Kernighan-Ritchie C works better than ANSI C or Java!"

"In 1980 we went to Microsoft to solicit language support for the 8087, for

which a socket was built into the then imminent IBM PC. Bill Gates attended

our meeting for a while and then prophesied that almost none of those sockets

would ever be filled! He departed, leaving a dark cloud over the discussions.

Microsoft’s languages still lack proper support for Intel’s floating-point."

or

http://www.cs.berkeley.edu/~wkahan/Math128/SqSqrts.pdf

"In particular Bill Gates Jr., Microsoft’s language expert, disparaged the extra-wide format in 1982 with consequences that persist today in Microsoft’s languages for the PC. Sun’s Bill Joy did likewise."

or

Floating-Point Arithmetic Besieged by “Business Decisions”

http://www.cs.berkeley.edu/~wkahan/ARITH_17.pdf

The first paper is from 2000, and I don't know whether Java now has extended-precision floating point for intermediate results. The modern "multimedia" CPU instructions of course don't, which makes them potentially dangerous when used on formulas like the one we started with.

Finally, from:

http://www.cs.berkeley.edu/~wkahan/Mindless.pdf

"Routine use of far more precision than deemed necessary by clever but numerically naive programmers, provided it does not run too slowly, is the best way available"

It does not require two divisions, only one.

Arne, thank you for your algorithm. I've already seen similar algorithms, but I don't know how they are derived. I'd greatly appreciate it if you could point me to a source that explains the derivation of this and/or similar algorithms. In some experiments I ran on similar calculations, algorithms like the one you presented give significantly better results than the "one pass, full squares" approach (which can produce completely wrong results for "inconvenient" input), but they can still be noticeably less accurate than the true two-pass algorithm. Do you know of any source where such an error analysis is shown? Thank you.

As documented in previous comments, the error can be large for the one-pass method.

But sometimes the input to sqrt can be negative, resulting in... well, who knows what, given that you're coding in C.

Try a list with 3 copies of 1.4592859018312442e+63 for example.

Thank you for the example. There seems to be a problem with very large values and little or no difference between them. std_dev1 works fine, while std_dev2 gives incorrect results.

Knuth actually has a one-pass algorithm like this:

_M gives you the mean, and _C / (N-1) is the variance.