#### Measurement and Uncertainty

There is always uncertainty in physical measurements.

Let's consider measuring a cup of flour to bake cookies. Simply enough, you scoop a cup of flour using your measuring cup. Now, how much flour do you have? Well, perhaps you didn't level off the flour and so you actually have a little more than one cup. We can reduce the uncertainty on our measurement by leveling the flour off. But even if you could perfectly pack the measuring cup, you still won't have exactly one cup of flour since your measuring cup is not exactly one cup. In practice, you are probably within a spoonful of one cup, which is close enough for most cookies. We consider that spoonful to be the uncertainty of our measurement.

#### Random Variables

We often model measurements as random variables. Each time we measure, we draw a sample from a normal distribution whose mean is the true value of the quantity and whose standard deviation is the uncertainty of the measurement.

In our baking example, we have a random variable $Flour \sim N(\mu_{Flour}, \sigma_{Flour}^2)$ representing our measurement of flour, with mean $\mu_{Flour} = \text{1 cup}$ and standard deviation $\sigma_{Flour} = \text{1 spoonful}$. We can think of the amount of flour actually scooped out as a sample drawn from this distribution.
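As a quick numerical illustration, here is a minimal sketch that simulates this model; the concrete numbers (1 cup, and a spoonful taken to be 0.05 cups) are assumptions chosen for the example:

```python
import random

# Illustrative values (assumed): mean of 1 cup, spoonful of 0.05 cups
MU_FLOUR = 1.0      # cups, the true value we aim for
SIGMA_FLOUR = 0.05  # cups, the uncertainty (one spoonful)

random.seed(0)
# Each scoop is one sample from N(mu, sigma^2)
scoops = [random.gauss(MU_FLOUR, SIGMA_FLOUR) for _ in range(10_000)]

mean = sum(scoops) / len(scoops)
print(f"average of {len(scoops)} scoops: {mean:.3f} cups")
```

Averaging many simulated scoops recovers a value very close to the true 1 cup, which is exactly what the model predicts.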

Let's make the cookie dough. We can do this by adding the flour and the other ingredients together. For simplicity, let's use one random variable $Other \sim N(\mu_{Other}, \sigma_{Other}^2)$ representing our measurement of all the other ingredients combined. Now, what is the uncertainty on this sum? You might be tempted to simply add the uncertainties of the flour and other ingredients: $\sigma_{Flour} + \sigma_{Other}$.

This isn't quite correct. Let's think about why. Intuitively, the combined uncertainty should be less than the simple sum $\sigma_{Flour} + \sigma_{Other}$, since it is unlikely for both measurements to err on the high side at the same time.

We can calculate the uncertainty with a bit of probability. If we form a new random variable for our cookie dough, $Dough = Flour + Other$, and the two measurements are independent, then $Dough \sim N(\mu_{Flour} + \mu_{Other}, \sigma_{Flour}^2 + \sigma_{Other}^2)$. Thus, the standard deviation of $Dough$ is actually $\sqrt{\sigma_{Flour}^2 + \sigma_{Other}^2}$. This is called "adding in quadrature" and gives a smaller result than our original guess $\sigma_{Flour} + \sigma_{Other}$[1].
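We can check adding in quadrature with a quick simulation. This sketch assumes illustrative uncertainties of 0.05 and 0.12 cups and compares the simulated standard deviation of the sum against both formulas:

```python
import math
import random

random.seed(1)

SIGMA_FLOUR = 0.05  # assumed uncertainties, in cups
SIGMA_OTHER = 0.12

# Simulate many batches: each batch adds one flour draw and one "other" draw
n = 100_000
doughs = [random.gauss(1.0, SIGMA_FLOUR) + random.gauss(2.0, SIGMA_OTHER)
          for _ in range(n)]

mean = sum(doughs) / n
std = math.sqrt(sum((d - mean) ** 2 for d in doughs) / (n - 1))

naive = SIGMA_FLOUR + SIGMA_OTHER                 # simple sum: too big
quadrature = math.hypot(SIGMA_FLOUR, SIGMA_OTHER) # sqrt(0.05^2 + 0.12^2) = 0.13

print(f"simulated: {std:.3f}, quadrature: {quadrature:.3f}, naive: {naive:.3f}")
```

The simulated spread lands on the quadrature value, well below the naive sum.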

#### Derivative Approach

Let's return to our baking example once more. Suppose we have the cookie dough and now need to scoop out 1 spoonful at a time for the cookies. How many cookies will we have? And what is the uncertainty on the number of cookies?

Let's start by creating another random variable $Spoonful \sim N(\mu_{Spoonful}, \sigma_{Spoonful}^2)$ representing a spoonful of cookie dough. Thus, the number of cookies is $Cookies = Dough / Spoonful$. Now, we just need to see how much the uncertainty in $Dough$ affects $Cookies$, how much the uncertainty in $Spoonful$ affects $Cookies$, and then add those quantities in quadrature.

We can calculate how the uncertainty in $Dough$ affects $Cookies$ by computing $\frac{\partial Cookies}{\partial Dough} \sigma_{Dough}$, the partial derivative of $Cookies$ with respect to $Dough$ multiplied by the uncertainty in $Dough$. This makes sense because $\frac{\partial Cookies}{\partial Dough}$ is precisely how much a change in $Dough$ changes $Cookies$, and we expect $Dough$ to vary by about $\sigma_{Dough}$. Similarly, we can calculate $\frac{\partial Cookies}{\partial Spoonful} \sigma_{Spoonful}$. Adding these in quadrature gives the uncertainty $\sigma_{Cookies} = \sqrt{\left(\frac{\partial Cookies}{\partial Dough} \sigma_{Dough}\right)^2 + \left(\frac{\partial Cookies}{\partial Spoonful} \sigma_{Spoonful}\right)^2}$.
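Here is a short sketch of the derivative approach for $Cookies = Dough / Spoonful$; all the numbers (3 cups of dough, 0.06-cup spoonfuls, and their uncertainties) are assumed for illustration:

```python
import math

# Illustrative numbers (assumed): 3 cups of dough, 0.06-cup spoonfuls
dough, sigma_dough = 3.0, 0.13
spoonful, sigma_spoonful = 0.06, 0.005

cookies = dough / spoonful

# Partial derivatives of Cookies = Dough / Spoonful
d_cookies_d_dough = 1.0 / spoonful
d_cookies_d_spoonful = -dough / spoonful ** 2

# Each term is (partial derivative) * (that input's uncertainty);
# the terms are then added in quadrature
sigma_cookies = math.hypot(d_cookies_d_dough * sigma_dough,
                           d_cookies_d_spoonful * sigma_spoonful)

print(f"cookies: {cookies:.1f} ± {sigma_cookies:.1f}")
```

With these numbers, the batch yields 50 cookies with an uncertainty of roughly 5.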

#### Law of Propagation of Uncertainty

Our formulas work for addition and division, but what if the output were a function of many variables? What if we needed exponents, logarithms, or other functions? Fortunately, the Law of Propagation of Uncertainty[1] covers all of these cases.

For any differentiable function $f(x,y,\ldots)$ of independent random variables $x,y,\ldots$, the uncertainty is, to first order, $\delta f(x,y,\ldots)=\sqrt{\left(\frac{\partial f}{\partial x} \delta x \right)^2 + \left(\frac{\partial f}{\partial y} \delta y \right)^2 + \ldots}$[2].
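The law translates directly into code. The sketch below estimates each partial derivative numerically (a first-order finite difference, adequate for small uncertainties) and adds the contributions in quadrature; it is a generic illustration, not a substitute for a proper uncertainty library:

```python
import math

def propagate(f, values, uncertainties, h=1e-6):
    """First-order propagation of uncertainty: numerically estimate each
    partial derivative of f, multiply by that input's uncertainty, and
    add the contributions in quadrature."""
    total = 0.0
    for i, (x, dx) in enumerate(zip(values, uncertainties)):
        bumped = list(values)
        bumped[i] = x + h
        partial = (f(*bumped) - f(*values)) / h  # forward finite difference
        total += (partial * dx) ** 2
    return math.sqrt(total)

# Sanity check against adding in quadrature for a plain sum
delta = propagate(lambda x, y: x + y, [1.0, 2.0], [0.05, 0.12])
print(round(delta, 3))  # should match sqrt(0.05^2 + 0.12^2) = 0.13
```

For a sum the partial derivatives are both 1, so the function reproduces adding in quadrature exactly; for nonlinear functions it reproduces the derivative approach from the previous section.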

Let's break this formula apart. First, we typically write $\delta x$ instead of $\sigma_x$ because our uncertainty is not always equal to a standard deviation. Second, each term $\frac{\partial f}{\partial x} \delta x$ represents how much the input $x$'s uncertainty affects the output. Lastly, we add the terms in quadrature, just as we did when summing normal distributions.

Want to derive the uncertainty equation for other operators and functions? Check out Proofs.

#### Fractional Uncertainty

Fractional uncertainty is the uncertainty of an estimate divided by the absolute value of the estimate itself, $\frac{\delta x}{|x|}$. We use fractional uncertainties because they express the uncertainty as a fraction of the estimate. For example, the fractional uncertainty of $2.00 \pm 0.05$ is $0.05/2.00 = 0.025$, so the percentage uncertainty is 2.5%.

When first learning to propagate uncertainty, many students are taught to simply add fractional uncertainties in the case of multiplication: $\frac{\delta (xy)}{|xy|} \approx \frac{\delta x}{|x|} + \frac{\delta y}{|y|}$. This approximation is quick to compute and works in many situations, but for independent measurements it overestimates the uncertainty relative to the correct formula, $\frac{\delta (xy)}{|xy|}=\sqrt{\left(\frac{\delta x}{|x|}\right)^2 + \left(\frac{\delta y}{|y|}\right)^2}$. Fortunately, this formula can be derived from the general law introduced earlier. Check out the multiplication section of Proofs to see the derivation.
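A quick computation shows how the two formulas differ; the measurements $2.00 \pm 0.05$ and $3.00 \pm 0.09$ are illustrative:

```python
import math

# Illustrative measurements (assumed)
x, dx = 2.00, 0.05
y, dy = 3.00, 0.09

frac_x = dx / abs(x)  # 0.025
frac_y = dy / abs(y)  # 0.030

approx = frac_x + frac_y            # simple sum: overestimates
exact = math.hypot(frac_x, frac_y)  # adding in quadrature

print(f"approximate: {approx:.4f}, quadrature: {exact:.4f}")
```

The simple sum gives 5.5% while adding in quadrature gives about 3.9%, so the shortcut errs on the side of a larger uncertainty.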

#### Significant Figures

Great, now we can propagate uncertainty through any differentiable function. One last question remains: how do we round our measurements? The convention is to round the uncertainty to one significant figure. Then, we round the estimated value to the same place value as the uncertainty[1].

For example, we would round $1.0 ± 0.15$ to $1.0 ± 0.2$ because the uncertainty should have one significant figure. And we would write $2 ± 0.5$ as $2.0 ± 0.5$ because the estimate should be written to the same place value as the uncertainty.
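This rounding convention is easy to automate. The sketch below uses Python's decimal module so that a value like 0.15 rounds half-up to 0.2 instead of falling prey to binary floating-point representation; the helper name is our own:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_measurement(value, uncertainty):
    """Round the uncertainty to one significant figure, then round the
    value to the same place value (the convention described above)."""
    unc = Decimal(str(uncertainty))
    # adjusted() gives the exponent of the leading significant digit,
    # so quantizing to this power of ten keeps one significant figure
    quantum = Decimal(1).scaleb(unc.adjusted())
    rounded_unc = unc.quantize(quantum, rounding=ROUND_HALF_UP)
    rounded_val = Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)
    return float(rounded_val), float(rounded_unc)

print(round_measurement(1.0, 0.15))  # → (1.0, 0.2)
print(round_measurement(2.0, 0.5))   # → (2.0, 0.5)
```

The two calls reproduce the worked examples above: the uncertainty keeps one significant figure, and the estimate is written to the matching place value.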

#### References

[1] Evaluation of Measurement Data — Guide to the Expression of Uncertainty in Measurement (GUM). JCGM 100:2008, September 2008.

[2] Taylor, John R. An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements, Second Edition. University Science Books, 1997.