
Notes on standard error calculations

Basics

There are various ways to propagate errors; one of them is parametric. The mean (μ) and variance (Var(x) or σ²) of a distribution (random variable) are quantities that describe it: technically, the mean is the first raw moment and the variance is the second central moment. The standard error of the mean is the square root of the variance divided by the sample size (eq. A1).
se(x)=\sqrt{\frac{Var(x)}{n}}(A1)
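As a quick numerical illustration of eq. A1 (a minimal sketch; the data values are arbitrary, and NumPy's population variance, `ddof=0`, is used to match the definition of Var(x) given below in eq. B2):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

var_x = x.var(ddof=0)           # population variance, Var(x)
se_x = np.sqrt(var_x / x.size)  # eq. A1: se(x) = sqrt(Var(x) / n)

print(se_x)  # ≈ 0.7071 (Var(x) = 4, n = 8)
```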
To propagate the errors, two formulae were used. The first is the Bienaymé rule, which states that the variance of a sum of independent random variables is the sum of their variances. Second, the variance of a ratio of two averages can be calculated with a formula that is relatively easy to derive.
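The Bienaymé rule is easy to check by simulation (a sketch of my own, not from the original notes; the normal distributions, their parameters, and the random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two independent samples with known variances 4 and 9.
x = rng.normal(loc=1.0, scale=2.0, size=n)
y = rng.normal(loc=5.0, scale=3.0, size=n)

var_sum = (x + y).var()        # empirical Var(x + y)
var_parts = x.var() + y.var()  # Bienaymé: Var(x) + Var(y)

print(var_sum, var_parts)  # both close to 4 + 9 = 13
```

The two values differ only by the sampling noise of the empirical covariance, which shrinks as n grows.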

Derivation of standard error of a ratio

The expected value (E(x)) is the average of a series of values sampled from a random variable (x). The mean is the expected value of the values themselves (eq. B1), while the variance is the expected value of the squared differences from the mean (eq. B2).
\mu_x=\sum\limits_{i=1}^n\frac{x_i}{n}=E(x)(B1)
Var(x)=\sum\limits_{i=1}^n\frac{(x_i-\mu)^2}{n}=E[(x-\mu)^2](B2)
To proceed we need to determine the variance of a function. Eq. B2 can easily be rewritten as eq. B3 (König–Huygens theorem) thanks to a binomial expansion and the linearity of the expected value (eqs. B4 and B5).
Var(x)=E\big ((x-\mu)^2 \big )=E(x^2)-[E(x)]^2(B3)
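Eqs. B1–B3 can be verified numerically (a minimal sketch; the four data values are arbitrary, chosen only to make the arithmetic easy to follow):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

mu = x.mean()                            # eq. B1: E(x)
var = ((x - mu) ** 2).mean()             # eq. B2: E[(x - mu)^2]
var_huygens = (x ** 2).mean() - mu ** 2  # eq. B3: E(x^2) - [E(x)]^2

print(mu, var)  # 2.5 1.25 (and var_huygens agrees with var)
```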
E(a)=a \textrm{ (where a is a constant) }\therefore E(E(x))=E(x)(B4)
E(x+y)=E(x)+E(y)(B5)
Near a given point, a function can be approximated by a polynomial thanks to the Taylor series, which is an infinite sum; for simplicity only the first two terms are shown in eq. B6, giving what is known as a first-order approximation.
f(x)\approx f(\mu)+f'(\mu)(x-\mu)(B6)
Rewriting eq. B3 for a function gives eq. B7; expanding each term with the Taylor series (eq. B6) gives eq. B8. In the first term of eq. B8 the square of the binomial is expanded and the expected value is then distributed over each member of the sum, whereas in the second term the expected value is distributed over the two summands and the square of the binomial is then resolved. It should be noted that by definition the expected value of the (linear) difference from the mean is zero (eq. B9). Several terms then cancel out, giving the first-order approximation for the variance of a function (eq. B10).
Var(f(x))=E\big([f(x)]^2 \big)- \big[E\big(f(x)\big)\big]^2(B7)
Var(f(x))\approx E\Big([f(\mu)+f'(\mu)\cdot (x-\mu)]^2 \Big)- \Big[E\Big(f(\mu)+f'(\mu)\cdot (x-\mu)\Big)\Big]^2(B8)
E(x-\mu)=\mu- \mu=0(B9)
Var(f(x))\approx [f'(\mu)]^2\cdot Var(x)(B10)
Higher-order approximations are possible (e.g. second order, eq. B11, which assumes a distribution with negligible skewness), but not only is there a diminishing return in precision, the complexity also increases dramatically.
Var(f(x))\approx [f'(\mu)]^2\cdot Var(x)+\frac{1}{2}[f''(\mu)]^2\cdot Var(x)^2(B11)
Having proved the formula for the variance of a function (eq. B10), we can proceed to determine the variance of a ratio. First, the ratio is converted into logarithmic form (eq. B12). The variance of a logarithm (eq. B13) follows from the approximation just found (eq. B10); by the Bienaymé rule, the variance of the log-ratio is then the sum of the variances of the two logarithms (assuming x and y are independent). Applying eq. B10 once more, to the exponential of the log-ratio, yields the first-order Taylor approximation of the variance of a ratio (eq. B14).
\ln \bigg(\frac{x}{y}\bigg)=\ln(x)-\ln(y)(B12)
Var(\ln(x))\approx \bigg(\frac{1}{\mu_x}\bigg)^2 \cdot Var(x)(B13)
Var\bigg(\frac{x}{y}\bigg)\approx \frac{Var(x)}{\mu_y^2}+\frac{Var(y) \cdot \mu_x^2}{\mu_y^4}(B14)
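Finally, eq. B14 can be checked by simulation (again a sketch with assumed inputs: two independent normal samples whose denominators stay well away from zero, with means and variances chosen so the first-order approximation should hold):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
x = rng.normal(20.0, 1.0, size=n)  # mu_x = 20, Var(x) = 1
y = rng.normal(10.0, 0.5, size=n)  # mu_y = 10, Var(y) = 0.25

empirical = (x / y).var()
# eq. B14: Var(x)/mu_y^2 + Var(y)*mu_x^2/mu_y^4
approx = 1.0 / 10.0**2 + 0.25 * 20.0**2 / 10.0**4

print(empirical, approx)  # both close to 0.02
```

The small residual difference comes from the neglected higher-order terms and from sampling noise.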
Author: Matteo P. Ferla Citation: M.P. Ferla, 2016. JS & CSS libraries used: FontAwesome, Google Charts, Google Fonts, Jmat.