Three Ways to Calculate Mean and Variance

Note: The information on this page is over 6 years old. I still make it available, but it probably no longer reflects my current knowledge or beliefs.

Mon, 20 Jul 2009

There is a free e-book on DSP (Digital Signal Processing) called The Scientist and Engineer's Guide to Digital Signal Processing, by Steven W. Smith, Ph.D. As I'm on holiday now, I've started reading it and I've found some interesting information even in the introductory chapters. For example, it presents three algorithms for calculating the mean and variance. Let's say we have some data in a vector of bytes and we want to calculate its statistics.

#include <vector>
typedef unsigned int uint; // shorthand used in the snippets below

std::vector<unsigned char> Bytes;
// Fill Bytes...
uint N = (uint)Bytes.size();

The traditional, two-pass algorithm looks like this:

float Mean = 0.f, Variance = 0.f;
// First pass: the mean
for (uint i = 0; i < N; i++)
  Mean += (float)Bytes[i];
Mean /= (float)N;
// Second pass: sum of squared deviations from the mean
for (uint i = 0; i < N; i++)
  Variance += ((float)Bytes[i] - Mean) * ((float)Bytes[i] - Mean);
Variance /= (float)(N-1); // N-1 gives the unbiased sample variance

Now the incremental one, which can return the current mean and variance at any time while you keep adding new pieces of data. All we need to maintain is the current sample count, the sum of the samples and the sum of the squared samples, because the variance can be computed as (SqSum - Sum*Sum/N) / (N-1). I've created a template class for that, which can be parametrized with float, double, int or other numeric types.

#include <cassert>

template <typename T>
class IncrementalMeanAndVarianceCalc
{
public:
  IncrementalMeanAndVarianceCalc() : m_Sum(T()), m_SqSum(T()), m_Count(0) { }
  void Clear() { m_Sum = m_SqSum = T(); m_Count = 0; }
  
  void Add(const T &v)
  {
    m_Sum += v;
    m_SqSum += v*v;
    m_Count++;
  }
  void Add(const T *values, uint valCount)
  {
    for (uint i = 0; i < valCount; i++)
    {
      m_Sum += values[i];
      m_SqSum += values[i]*values[i];
    }
    m_Count += valCount;
  }
  
  bool IsEmpty() { return m_Count == 0; }
  uint GetCount() { return m_Count; }
  T GetSum() { return m_Sum; }
  T GetSqSum() { return m_SqSum; }
  
  T GetMean()
  {
    assert(!IsEmpty());
    return m_Sum / m_Count;
  }
  T GetVariance(bool unbiased = true)
  {
    // Dividing by (count - 1) gives the unbiased sample variance,
    // dividing by count gives the population variance.
    assert(unbiased ? m_Count > 1 : !IsEmpty());
    if (unbiased)
      return (m_SqSum - m_Sum*m_Sum/m_Count) / (m_Count - 1);
    else
      return (m_SqSum - m_Sum*m_Sum/m_Count) / m_Count;
  }
  void GetMeanAndVariance(T &outMean, T &outVariance, bool unbiased = true)
  {
    assert(unbiased ? m_Count > 1 : !IsEmpty());
    outMean = m_Sum / m_Count;
    if (unbiased)
      outVariance = (m_SqSum - m_Sum*m_Sum/m_Count) / (m_Count - 1);
    else
      outVariance = (m_SqSum - m_Sum*m_Sum/m_Count) / m_Count;
  }

private:
  T m_Sum, m_SqSum;
  uint m_Count;
};
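
A minimal usage sketch (the sample values below are arbitrary, just for illustration):

#include <cstdio>

int main()
{
  IncrementalMeanAndVarianceCalc<float> calc;
  calc.Add(2.f);
  calc.Add(4.f);
  float moreData[] = { 4.f, 4.f, 5.f, 5.f, 7.f, 9.f };
  calc.Add(moreData, 6);

  float mean, variance;
  calc.GetMeanAndVariance(mean, variance);
  // For these 8 samples the mean is 5 and the unbiased variance is 32/7 = ~4.57.
  printf("Mean = %g, Variance = %g\n", mean, variance);
  return 0;
}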

Finally, there is an algorithm that calculates the mean and variance from a histogram. It is a very efficient method. If there are only 256 possible values for each sample, we don't have to do any binning, and calculating the histogram is very simple:

uint Histogram[256] = { 0 };
for (uint i = 0; i < N; i++)
  Histogram[Bytes[i]]++;

Now, the mean and variance can be calculated this way:

float Mean = 0.f, Variance = 0.f;
for (uint i = 0; i < 256; i++)
  Mean += (float)i * Histogram[i];
Mean /= (float)N;
for (uint i = 0; i < 256; i++)
  Variance += (i - Mean) * (i - Mean) * Histogram[i];
Variance /= (float)(N - 1);
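
As a sanity check, all three methods should give the same results, up to floating-point rounding. A quick comparison sketch, reusing Bytes and N from the first snippet and the Mean and Variance just computed from the histogram:

IncrementalMeanAndVarianceCalc<float> calc;
for (uint i = 0; i < N; i++)
  calc.Add((float)Bytes[i]);

float IncMean, IncVariance;
calc.GetMeanAndVariance(IncMean, IncVariance);
// IncMean and IncVariance should match the Mean and Variance computed
// above from the histogram, to within a small tolerance.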
