TF ErrorAna PropOfErr

Taylor Expansion

A quantity calculated from other quantities with known uncertainties will itself have an uncertainty that depends on the uncertainties of those inputs.

To determine the uncertainty in a quantity which is a function of other quantities, you can consider the dependence on these quantities in terms of a Taylor expansion.

The Taylor series expansion of a function f(x) about the point a is given as

[math]f(x) = f(a) + \left . f^{\prime}(x)\right |_{x=a} \frac{(x-a)}{1!} + \left . f^{\prime \prime}(x)\right |_{x=a} \frac{(x-a)^2}{2!} + \cdots[/math]

[math]= \left . \sum_{n=0}^{\infty} f^{(n)}(x)\right |_{x=a} \frac{(x-a)^n}{n!}[/math]


For small values of x ([math]x \ll 1[/math]) we can expand the function about [math]a = 0[/math] such that

[math]\sqrt{1+x} = \left . \sqrt{1+x} \right |_{x=0} + \left . \frac{1}{2}(1+x)^{-1/2}\right |_{x=0} \frac{x}{1!}+ \left . \frac{1}{2}\frac{-1}{2}(1+x)^{-3/2} \right |_{x=0} \frac{x^2}{2!} + \cdots[/math]

[math]=1 + \frac{x}{2} - \frac{x^2}{8}[/math]
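
As a quick numerical check of this expansion, the standalone C++ sketch below (the values of x are arbitrary, chosen only for illustration) compares the quadratic approximation with the exact square root:

 // Quick check of sqrt(1+x) ~ 1 + x/2 - x^2/8 for a few small x.
 // The values of x are arbitrary illustration choices.
 #include <cmath>
 #include <cstdio>
 
 int main() {
     for (double x : {0.5, 0.1, 0.01}) {
         double exact  = std::sqrt(1.0 + x);
         double approx = 1.0 + x / 2.0 - x * x / 8.0;
         std::printf("x = %5.2f  exact = %.6f  approx = %.6f  diff = %.1e\n",
                     x, exact, approx, exact - approx);
     }
     return 0;
 }

The disagreement shrinks rapidly as x becomes small, as expected for a truncated Taylor series.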


The Taylor expansion of a function of two variables [math](x, y)[/math] about the averages of the two variables [math](\bar {x} , \bar{y} )[/math] is given by

[math]f(x, y)=f(\bar {x}, \bar{y})+(x-\bar {x}) \frac{\partial f}{\partial x}\bigg |_{(x = \bar {x}, y = \bar{y})} +(y-\bar{y}) \frac{\partial f}{\partial y}\bigg |_{(x = \bar {x}, y = \bar{y})}[/math]

or

[math]f(x, y)-f(\bar {x}, \bar{y})=(x-\bar {x}) \frac{\partial f}{\partial x}\bigg |_{(x = \bar {x}, y = \bar{y})} +(y-\bar{y}) \frac{\partial f}{\partial y}\bigg |_{(x = \bar {x}, y = \bar{y})}[/math]


The average

[math]f(\bar {x}, \bar{y}) \equiv \frac{\sum f(x,y)_i}{N}[/math]

The term

[math]\delta f = f(x, y)-f(\bar {x}, \bar{y})[/math]

represents a small fluctuation [math](\delta f)[/math] of the function [math]f[/math] from its average [math]f(\bar {x}, \bar{y})[/math]. If we ignore higher-order terms in the Taylor expansion (i.e., the fluctuations are small), then we can write the variance, using its definition, as

[math]\sigma^2 = \frac{\sum \left [ f(x,y)_i - f(\bar {x}, \bar{y})\right ]^2}{N}[/math]
[math]= \frac{\sum \left [(x_i-\bar {x}) \frac{\partial f}{\partial x}+(y_i-\bar{y}) \frac{\partial f}{\partial y}\right ]^2}{N}[/math]
[math]= \frac{\sum (x_i-\bar {x})^2 \left ( \frac{\partial f}{\partial x}\right )^2}{N} + \frac{\sum (y_i-\bar {y})^2 \left ( \frac{\partial f}{\partial y}\right )^2}{N} + 2 \frac{\sum (x_i-\bar {x}) \left ( \frac{\partial f}{\partial x} \right ) (y_i-\bar {y}) \left ( \frac{\partial f}{\partial y}\right )}{N} [/math]
[math]\sigma^2 = \sigma_x^2 \left ( \frac{\partial f}{\partial x}\right )^2 + \sigma_y^2\left ( \frac{\partial f}{\partial y}\right )^2 + 2 \sigma_{x,y}^2 \left ( \frac{\partial f}{\partial x} \right ) \left ( \frac{\partial f}{\partial y}\right ) [/math]

where

[math]\sigma_{x,y}^2 = \frac{\sum (x_i-\bar {x}) (y_i-\bar {y}) }{N} \equiv[/math] Covariance


The above procedure generalizes to functions of any number of variables.
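
As an illustration of this formula, the C++ sketch below (all input values are made up) draws correlated samples of [math]x[/math] and [math]y[/math], evaluates [math]f(x,y) = xy[/math], and compares the variance of [math]f[/math] computed directly from the sample with the propagated variance built from the partial derivatives and the covariance:

 // Sketch: compare the variance of f(x,y) = x*y computed directly from
 // a sample against the propagated variance
 //   sigma_f^2 = (df/dx)^2 sigma_x^2 + (df/dy)^2 sigma_y^2
 //             + 2 (df/dx)(df/dy) sigma_xy .
 // Input values are made up; x and y are correlated on purpose so the
 // covariance term matters.
 #include <cmath>
 #include <cstdio>
 #include <random>
 #include <vector>
 
 int main() {
     std::mt19937 gen(42);
     std::normal_distribution<double> gx(10.0, 0.2), noise(0.0, 0.1);
 
     const int N = 100000;
     std::vector<double> x(N), y(N), f(N);
     for (int i = 0; i < N; ++i) {
         x[i] = gx(gen);
         y[i] = 5.0 + 0.3 * (x[i] - 10.0) + noise(gen);  // correlated with x
         f[i] = x[i] * y[i];
     }
 
     auto mean = [N](const std::vector<double>& v) {
         double s = 0; for (double u : v) s += u; return s / N; };
     double xb = mean(x), yb = mean(y), fb = mean(f);
 
     double sx2 = 0, sy2 = 0, sxy = 0, sf2 = 0;
     for (int i = 0; i < N; ++i) {
         sx2 += (x[i] - xb) * (x[i] - xb);
         sy2 += (y[i] - yb) * (y[i] - yb);
         sxy += (x[i] - xb) * (y[i] - yb);
         sf2 += (f[i] - fb) * (f[i] - fb);
     }
     sx2 /= N; sy2 /= N; sxy /= N; sf2 /= N;
 
     // Partial derivatives of f = x*y evaluated at the means.
     double dfdx = yb, dfdy = xb;
     double prop = dfdx * dfdx * sx2 + dfdy * dfdy * sy2
                 + 2.0 * dfdx * dfdy * sxy;
 
     std::printf("direct sigma_f^2 = %.4f  propagated = %.4f\n", sf2, prop);
     return 0;
 }

The two numbers agree to first order; dropping the covariance term would visibly spoil the agreement for this correlated sample.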

Instrumental and Statistical Uncertainties

http://www.physics.uoguelph.ca/~reception/2440/StatsErrorsJuly26-06.pdf

Counting Experiment Example

The table below reports 8 measurements of the coincidence rate observed by two scintillators detecting cosmic rays. The scintillators are placed a distance (x) away from each other in order to detect cosmic rays falling on the earth's surface. The time and observed coincidence counts are reported in separate columns, as well as the angle made by the normal to the detector with the earth's surface.

{| class="wikitable"
! Date !! Time (hrs) !! [math]\theta[/math] !! Coincidence Counts !! Mean Coinc/Hr !! [math]\sigma_{Poisson} = \sqrt{\mbox{Mean Counts/Hr}}[/math] !! [math]\left | \sigma \right |[/math] from Mean
|-
| 9/12/07 || 20.5 || 30 || 2233 || 109 || 10.4 || 1
|-
| 9/14/07 || 21 || 30 || 1582 || 75 || 8.7 || 2
|-
| 10/3/07 || 21 || 30 || 2282 || 100 || 10.4 || 1
|-
| 10/4/07 || 21 || 30 || 2029 || 97 || 9.8 || 0.1
|-
| 10/15/07 || 21 || 30 || 2180 || 100 || 10 || 0.6
|-
| 10/18/07 || 21 || 30 || 2064 || 99 || 9.9 || 0.1
|-
| 10/23/07 || 21 || 30 || 2003 || 95 || 9.8 || 0.2
|-
| 10/26/07 || 21 || 30 || 1943 || 93 || 9.6 || 0.5
|}

The average count rate for a given trial is given in the 5th column by dividing column 4 by column 2.

One can expect a Poisson parent distribution because the probability of a cosmic ray interacting with the scintillator is low. The variance of the measurement in each trial is related to the counting rate by

[math]\sigma^2 = \mu =[/math] average counting rate

as a result of the assumption that the parent distribution is Poisson. The value of this [math]\sigma[/math] is shown in column 6.

Is the Poisson distribution the parent distribution in this experiment?

To try to answer the above question, let's determine the mean and standard deviation of the data:

[math]\bar{x} =\frac{\sum x_i}{8} = 97.44 \;\;\;\; x_i = \mbox{counts/hr in trial } i[/math]
[math]s = \sqrt{\frac{\sum (x_i-\bar{x})^2}{8-1}} = 10.8[/math]


If you approximate the Poisson distribution by a Gaussian, then the probability that any one measurement lies within 1 [math]\sigma[/math] of the mean is 68%, the probability that a Gaussian variate lies within 1 [math]\sigma[/math] of the mean. For a Poisson distribution with a mean of 97, 66% of the data fall within 1 [math]\sigma = \sqrt{97}[/math] of the mean. The ROOT session below computes both probabilities from the respective CDFs:

root [26] ROOT::Math::poisson_cdf(97-sqrt(97),97)
(double)1.67580969302001004e-01
root [30] 1-2*ROOT::Math::poisson_cdf(97-sqrt(97),97)        
(const double)6.64838061395997992e-01

root [28] ROOT::Math::normal_cdf(97-sqrt(97),sqrt(97),97)
(double)1.58655253931457185e-01
root [29] 1-2*ROOT::Math::normal_cdf(97-sqrt(97),sqrt(97),97)
(const double)6.82689492137085630e-01

The 7th column above identifies how many sigma the mean of that trial is from the average [math]\bar{x}[/math].


The expected number of trials within 1 [math]\sigma[/math] of the mean is then [math]0.68 \times 8 \approx 5[/math].

The data show 7 of the 8 trials, or 87.5%, within 1 [math]\sigma[/math] of the mean.


How about the average [math]\sigma[/math] assuming a Poisson distribution?

If you take the average of the [math]\sigma[/math] estimates in column 6 you get

[math]\frac{\sum \sigma_i(Poisson)}{8} = 9.86[/math]

Using this, one can calculate the variance of these [math]\sigma[/math] estimates as

[math]\frac{\sum \left ( \sigma_i(Poisson) - \overline{\sigma(Poisson)}\right)^2}{8-1} = (0.56)^2[/math]


Comparing the [math]\sigma[/math] from the 8 trials to the [math]\sigma[/math] from the Poisson estimate, you have

[math]10.8 \approx 9.86 \pm 0.56[/math], i.e., agreement within 2 [math]\sigma[/math].
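
These numbers can be reproduced directly from columns 2 and 4 of the table. The C++ sketch below carries out the arithmetic (the counts and times are copied from the table; the rates are recomputed from the raw counts rather than taken from the rounded column 5):

 // Reproduce the counting-experiment statistics from the table above.
 #include <cmath>
 #include <cstdio>
 
 int main() {
     // Columns 2 and 4 of the table.
     const double hours[8]  = {20.5, 21, 21, 21, 21, 21, 21, 21};
     const double counts[8] = {2233, 1582, 2282, 2029, 2180, 2064, 2003, 1943};
     const int N = 8;
 
     double rate[8], xbar = 0;
     for (int i = 0; i < N; ++i) {
         rate[i] = counts[i] / hours[i];   // column 5: counts per hour
         xbar += rate[i];
     }
     xbar /= N;                            // mean rate, ~97.4
 
     double s2 = 0, sbar = 0;
     for (int i = 0; i < N; ++i) {
         s2   += (rate[i] - xbar) * (rate[i] - xbar);
         sbar += std::sqrt(rate[i]);       // column 6: Poisson sigma estimate
     }
     double s = std::sqrt(s2 / (N - 1));   // sample std dev, ~10.8
     sbar /= N;                            // average Poisson sigma, ~9.86
 
     double v = 0;
     for (int i = 0; i < N; ++i) {
         double d = std::sqrt(rate[i]) - sbar;
         v += d * d;
     }
     double ds = std::sqrt(v / (N - 1));   // spread of the estimates, ~0.56
 
     std::printf("mean = %.2f  s = %.2f  <sigma_P> = %.2f +/- %.2f\n",
                 xbar, s, sbar, ds);
     return 0;
 }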

What is really required however is an estimate of the probability that the assumption of a Poisson distribution is correct (Hypothesis test). This will be the subject of future sections.

Error Propagation

[math]f = \bar{x} = \frac{\sum x_i}{N}[/math]
[math]\frac{\partial f}{\partial x_i} = \frac{1}{N}[/math]
[math]\delta f = \frac{\partial f}{\partial x_1}\sigma_{x_1} + \frac{\partial f}{\partial x_2}\sigma_{x_2} + \cdots \frac{\partial f}{\partial x_n}\sigma_{x_n}[/math]
[math]\left ( \delta f \right)^2 = \left ( \frac{\partial f}{\partial x_1}\sigma_{x_1} + \frac{\partial f}{\partial x_2}\sigma_{x_2} + \cdots \frac{\partial f}{\partial x_n}\sigma_{x_n} \right )^2[/math]
[math]= \left ( \frac{\partial f}{\partial x_1}\sigma_{x_1} \right )^2 + \left ( \frac{\partial f}{\partial x_2}\sigma_{x_2} \right )^2 + \cdots \left ( \frac{\partial f}{\partial x_n}\sigma_{x_n} \right )^2 + 2 \left ( \frac{\partial f}{\partial x_1} \right ) \left ( \frac{\partial f}{\partial x_2} \right ) \sigma_{x_1 x_2} + \cdots[/math]
[math]\sigma_{x_i x_j} = 0 \; (i \neq j)[/math] for independent measurements [math]\Rightarrow[/math] no covariances
[math]\left ( \delta f \right)^2 = \left ( \frac{\partial f}{\partial x_1}\sigma_{x_1} \right )^2 + \left ( \frac{\partial f}{\partial x_2}\sigma_{x_2} \right )^2 + \cdots \left ( \frac{\partial f}{\partial x_n}\sigma_{x_n} \right )^2 [/math]
[math] = \left ( \frac{1}{N} \sigma_{x_1} \right )^2 + \left ( \frac{1}{N}\sigma_{x_2} \right )^2 + \cdots \left ( \frac{1}{N}\sigma_{x_n} \right )^2 [/math]

If

[math] \sigma_i = \sigma[/math]

Then

[math]\left ( \delta f \right)^2 = \left ( \frac{1}{N} \sigma \right )^2 + \left ( \frac{1}{N}\sigma \right )^2 + \cdots \left ( \frac{1}{N}\sigma \right )^2 [/math]
[math]=\frac{ \sigma^2}{N}[/math]
Does this mean that we get an infinitely precise measurement if [math]N \rightarrow \infty[/math]?
No! In reality there are systematic errors in every experiment, so the best you can do is reduce your statistical uncertainty to the point where the systematic errors dominate. There is also the observation that, in practice, it is difficult to find an experiment free of "non-statistical fluctuations".
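
The [math]\sigma/\sqrt{N}[/math] scaling can be seen in a quick simulation. The sketch below (standalone C++; the Gaussian parent with [math]\sigma = 2[/math] and the sample sizes are arbitrary choices) computes the mean of N samples many times and compares the spread of those means to [math]\sigma/\sqrt{N}[/math]:

 // Sketch: the spread of the mean of N samples shrinks like sigma/sqrt(N).
 // The parent distribution (mean 50, sigma 2) is an arbitrary choice.
 #include <cmath>
 #include <cstdio>
 #include <random>
 
 int main() {
     std::mt19937 gen(1);
     const double sigma = 2.0;
     std::normal_distribution<double> parent(50.0, sigma);
 
     for (int N : {10, 100, 1000}) {
         const int trials = 5000;
         double s = 0, s2 = 0;
         for (int t = 0; t < trials; ++t) {
             double m = 0;
             for (int i = 0; i < N; ++i) m += parent(gen);
             m /= N;                      // mean of N samples
             s += m; s2 += m * m;
         }
         double var = s2 / trials - (s / trials) * (s / trials);
         std::printf("N = %4d  spread of means = %.4f  sigma/sqrt(N) = %.4f\n",
                     N, std::sqrt(var), sigma / std::sqrt(N));
     }
     return 0;
 }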

Example: Table Area


Consider a calculation of a table's area

[math]A= L \times W[/math]

This means that the area (A) is a function of the length (L) and the width (W) of the table.

[math]A = f(L,W)[/math]


We can write the variance of the area as

[math]\sigma^2_A = \frac{\sum_{i=1}^{i=N} (A_i - \bar{A})^2}{N}[/math]
[math]= \frac{\sum_{i=1}^{i=N} \left [ (L_i-\bar{L}) \frac{\partial A}{\partial L} \bigg |_{\bar L \bar W} + (W_i-\bar W) \frac{\partial A}{\partial W} \bigg |_{\bar L \bar W} \right] ^2}{N}[/math]


[math]= \frac{\sum_{i=1}^{i=N} \left [ (L_i-\bar{L}) \frac{\partial A}{\partial L} \bigg |_{\bar L \bar W} \right ] ^2}{N} + \frac{\sum_{i=1}^{i=N} \left [ (W_i-\bar W) \frac{\partial A}{\partial W} \bigg |_{\bar L \bar W} \right] ^2 }{N}[/math]
[math]+2 \frac{\sum_{i=1}^{i=N} (L_i-\bar{L}) (W_i-\bar W) \frac{\partial A}{\partial L} \bigg |_{\bar L \bar W} \frac{\partial A}{\partial W} \bigg |_{\bar L \bar W} }{N} [/math]
[math]= \sigma^2_L \left ( \frac{\partial A}{\partial L} \right )^2 +\sigma^2_W \left ( \frac{\partial A}{\partial W} \right )^2 + 2 \sigma^2_{LW} \frac{\partial A}{\partial L} \frac{\partial A}{\partial W} [/math]

where [math]\sigma^2_{LW} = \frac{\sum_{i=1}^{i=N} (L_i-\bar{L}) (W_i-\bar W) }{N}[/math] is defined as the covariance between [math]L[/math] and [math]W[/math].
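
A minimal sketch of this propagation, using hypothetical paired measurements of [math]L[/math] and [math]W[/math] in meters (the values are made up for illustration):

 // Propagate the uncertainties of repeated (L, W) measurements into
 // the area A = L*W, including the covariance term derived above.
 #include <cmath>
 #include <cstdio>
 
 int main() {
     // Hypothetical paired measurements of the table (meters).
     const double L[5] = {2.01, 1.98, 2.00, 2.02, 1.99};
     const double W[5] = {1.00, 0.99, 1.01, 1.00, 1.00};
     const int N = 5;
 
     double Lb = 0, Wb = 0;
     for (int i = 0; i < N; ++i) { Lb += L[i]; Wb += W[i]; }
     Lb /= N; Wb /= N;
 
     double sL2 = 0, sW2 = 0, sLW = 0;
     for (int i = 0; i < N; ++i) {
         sL2 += (L[i] - Lb) * (L[i] - Lb);
         sW2 += (W[i] - Wb) * (W[i] - Wb);
         sLW += (L[i] - Lb) * (W[i] - Wb);
     }
     sL2 /= N; sW2 /= N; sLW /= N;
 
     // dA/dL = W and dA/dW = L, evaluated at the means.
     double sA2 = Wb * Wb * sL2 + Lb * Lb * sW2 + 2.0 * Lb * Wb * sLW;
 
     std::printf("A = %.4f +/- %.4f m^2\n", Lb * Wb, std::sqrt(sA2));
     return 0;
 }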

Weighted Mean and Variance

The variance [math](\sigma^2)[/math] in the above examples was assumed to be the same for all measurements from the parent distribution.

What happens when you wish to combine measurements with unequal variances (different experiments measuring the same quantity)?

Weighted Mean

Let's assume we have a measured quantity having a mean [math] X[/math] from a Gaussian parent distribution.

If you attempt to measure X with several different experiments you will likely have a series of results which vary in their precision.

Let's assume you have 2 experiments which obtained the averages [math]X_A[/math] and [math]X_B[/math] with uncertainties [math]\sigma_A[/math] and [math]\sigma_B[/math].


If we assume that each measurement is governed by a Gaussian distribution, then the probability of the first experiment observing the value [math]X_A[/math] is given by

[math]P(x=X_A) \propto \frac{e^{-\frac{1}{2} \left ( \frac{X_A-X}{\sigma_A}\right )^2}}{\sigma_A}[/math]


Similarly, the probability of the other experiment observing the average [math]X_B[/math] is


[math]P(x=X_B) \propto \frac{e^{-\frac{1}{2} \left ( \frac{X_B-X}{\sigma_B}\right )^2}}{\sigma_B}[/math]

Now the combined probability that the first experiment measures the average [math]X_A[/math] and the second [math]X_B[/math] is given as the product of the two probabilities, such that

[math]P(x=X_A,X_B) \propto \frac{e^{-\frac{1}{2} \left ( \frac{X_A-X}{\sigma_A}\right )^2}}{\sigma_A} \frac{e^{-\frac{1}{2} \left ( \frac{X_B-X}{\sigma_B}\right )^2}}{\sigma_B} = \frac{e^{-\frac{1}{2}\left [ \left ( \frac{X_A-X}{\sigma_A}\right )^2+\left ( \frac{X_B-X}{\sigma_B}\right )^2\right ]}}{\sigma_A \sigma_B}\equiv \frac{e^{-\frac{1}{2} \chi^2}}{\sigma_A \sigma_B}[/math]

where

[math] \chi^2 \equiv \left ( \frac{X_A-X}{\sigma_A}\right )^2+\left ( \frac{X_B-X}{\sigma_B}\right )^2[/math]


The principle of maximum likelihood (later the cornerstone of hypothesis testing) may be stated as follows:
The best estimate for the mean and standard deviation of the parent population is obtained when the observed set of values is the most likely to occur; i.e., the probability of the observation is a maximum.

Applying this principle to the two experiments means that the best estimate of [math]X[/math] is made when [math]P(x=X_A,X_B)[/math] is a maximum, which occurs when

[math] \chi^2 \equiv \left ( \frac{X_A-X}{\sigma_A}\right )^2+\left ( \frac{X_B-X}{\sigma_B}\right )^2 = [/math]Minimum

or

[math]\frac{\partial \chi^2}{\partial X} =2 \left ( \frac{X_A-X}{\sigma_A^2}\right )(-1)+2 \left ( \frac{X_B-X}{\sigma_B^2}\right )(-1)= 0[/math]
[math]\Rightarrow X = \frac{\frac{X_A}{\sigma_A^2} + \frac{X_B}{\sigma_B^2}}{\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2}}[/math]


If each observable ([math]x_i[/math]) is accompanied by an estimate of the uncertainty in that observable ([math]\sigma_i[/math]) then the weighted mean is defined as

[math]\bar{x} = \frac{ \sum_{i=1}^{i=n} \frac{x_i}{\sigma_i^2}}{\sum_{i=1}^{i=n} \frac{1}{\sigma_i^2}}[/math]
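
As a worked example with made-up numbers, suppose [math]X_A = 10.0 \pm 0.5[/math] and [math]X_B = 10.4 \pm 1.0[/math]. Then

[math]\bar{x} = \frac{\frac{10.0}{(0.5)^2} + \frac{10.4}{(1.0)^2}}{\frac{1}{(0.5)^2} + \frac{1}{(1.0)^2}} = \frac{40.0 + 10.4}{4 + 1} = 10.08[/math]

The more precise measurement dominates the combined value. The uncertainty of this weighted mean is derived in the next section.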

Weighted Variance

To determine the variance of the combined result, follow the Taylor-series-based prescription described above:

[math]\sigma^2 = \sum \sigma_i^2 \left ( \frac{\partial X}{\partial X_i}\right)^2 = \sigma_A^2\left ( \frac{\partial X}{\partial X_A}\right)^2 + \sigma_B^2\left ( \frac{\partial X}{\partial X_B}\right)^2[/math]
[math]\frac{\partial X}{\partial X_A} = \frac{\partial}{\partial X_A} \frac{\frac{X_A}{\sigma_A^2} + \frac{X_B}{\sigma_B^2}}{\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2}} = \frac{\frac{1}{\sigma_A^2}}{\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2}} [/math]
[math]\sigma^2 =\sigma_A^2 \left ( \frac{\frac{1}{\sigma_A^2}}{\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2}} \right)^2 + \sigma_B^2 \left ( \frac{\frac{1}{\sigma_B^2}}{\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2}} \right)^2[/math]
[math]= \frac{\frac{1}{\sigma_A^2}}{(\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2})^2} + \frac{\frac{1}{\sigma_B^2}}{(\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2})^2}[/math]
[math]= \frac{\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2}}{(\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2})^2}[/math]
[math]= \frac{1}{(\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2})}[/math]

The variance of the weighted mean is then given by

[math]\frac{1}{\sigma^2} = \sum_{i=1}^{i=n} \frac{1}{\sigma_i^2}[/math] = weighted variance
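
A minimal C++ sketch combining the two hypothetical measurements from the example above with the weighted mean and weighted variance formulas:

 // Combine n measurements x_i +/- sigma_i into the weighted mean and
 // its uncertainty.  The two inputs are the made-up example values
 // 10.0 +/- 0.5 and 10.4 +/- 1.0 from the previous section.
 #include <cmath>
 #include <cstdio>
 
 int main() {
     const double x[2]     = {10.0, 10.4};
     const double sigma[2] = {0.5, 1.0};
 
     double num = 0, den = 0;
     for (int i = 0; i < 2; ++i) {
         double w = 1.0 / (sigma[i] * sigma[i]);   // weight = 1/sigma_i^2
         num += w * x[i];
         den += w;
     }
     double mean = num / den;                      // weighted mean, 10.08
     double err  = std::sqrt(1.0 / den);           // sigma = 1/sqrt(sum w), ~0.45
 
     std::printf("weighted mean = %.3f +/- %.3f\n", mean, err);
     return 0;
 }

Note that the combined uncertainty (about 0.45) is smaller than either individual uncertainty, as adding information should always tighten the estimate.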


