Taylor Expansion
A quantity which is calculated using quantities with known uncertainties will have an uncertainty based upon the uncertainty of the quantities used in the calculation.
To determine the uncertainty in a quantity which is a function of other quantities, you can consider the dependence of these quantities in terms of a Taylor expansion.
The Taylor series expansion of a function f(x) about the point a is given as

:<math>f(x) = f(a) + \frac{\partial f}{\partial x}\bigg|_{x=a} (x-a) + \frac{1}{2!}\frac{\partial^2 f}{\partial x^2}\bigg|_{x=a} (x-a)^2 + \cdots</math>

For small values of x (x << 1) we can expand the function about 0 such that

:<math>f(x) = f(0) + \frac{\partial f}{\partial x}\bigg|_{x=0}\, x + \frac{1}{2!}\frac{\partial^2 f}{\partial x^2}\bigg|_{x=0}\, x^2 + \cdots</math>
The Taylor expansion of a function with two variables <math>f(x,y)</math> about the average of the two variables is given by

:<math>f(x,y) = f(\mu_x,\mu_y) + (x-\mu_x)\frac{\partial f}{\partial x}\bigg|_{(\mu_x,\mu_y)} + (y-\mu_y)\frac{\partial f}{\partial y}\bigg|_{(\mu_x,\mu_y)} + \cdots</math>

or

:<math>f(x,y) - f(\mu_x,\mu_y) \approx (x-\mu_x)\frac{\partial f}{\partial x}\bigg|_{(\mu_x,\mu_y)} + (y-\mu_y)\frac{\partial f}{\partial y}\bigg|_{(\mu_x,\mu_y)}</math>

The average <math>\bar{f} = f(\mu_x,\mu_y)</math>, so the term on the left represents a small fluctuation of the function from its average. If we ignore higher order terms in the Taylor expansion (this means the fluctuations are small) then we can write the variance using the definition as

:<math>\sigma_f^2 = \frac{1}{N}\sum_{i=1}^{N} \left ( f_i - \bar{f} \right )^2 = \sigma_x^2 \left ( \frac{\partial f}{\partial x} \right )^2 + \sigma_y^2 \left ( \frac{\partial f}{\partial y} \right )^2 + 2 \sigma_{xy}^2 \left ( \frac{\partial f}{\partial x} \right ) \left ( \frac{\partial f}{\partial y} \right )</math>

where

:<math>\sigma_{xy}^2 \equiv \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu_x)(y_i - \mu_y)</math> = Covariance
The above can be reproduced for functions with multiple variables.
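As a quick numerical illustration of the first-order variance formula above, here is a minimal sketch (not from the original text; the choice f(x,y) = xy and the means and widths are assumed example values) comparing a Monte Carlo estimate of the variance of f against the Taylor-expansion estimate:

 // A minimal Monte Carlo check of the first-order variance formula.
 // Assumed example: f(x,y) = x*y with independent Gaussian x and y.
 #include <cstdio>
 #include <random>
 int main() {
     const double mu_x = 2.0, sigma_x = 0.1;  // hypothetical means/widths
     const double mu_y = 5.0, sigma_y = 0.2;
     const int N = 1000000;
     std::mt19937 gen(42);
     std::normal_distribution<double> gx(mu_x, sigma_x), gy(mu_y, sigma_y);
     double sum = 0.0, sum2 = 0.0;
     for (int i = 0; i < N; ++i) {
         double f = gx(gen) * gy(gen);        // f(x,y) = x*y
         sum  += f;
         sum2 += f * f;
     }
     double mean = sum / N;
     double var  = sum2 / N - mean * mean;    // sampled variance of f
     // Taylor estimate: df/dx = y, df/dy = x, no covariance term.
     double var_taylor = sigma_x*sigma_x*mu_y*mu_y + sigma_y*sigma_y*mu_x*mu_x;
     std::printf("Monte Carlo: %f  Taylor: %f\n", var, var_taylor);
     return 0;
 }

The two numbers agree to within the small higher-order term that the truncated Taylor expansion drops, which is the point of the "fluctuations are small" assumption.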
Instrumental and Statistical Uncertainties
http://www.physics.uoguelph.ca/~reception/2440/StatsErrorsJuly26-06.pdf
Counting Experiment Example
The table below reports 8 measurements of the coincidence rate observed by two scintillators detecting cosmic rays. The scintillators are placed a distance (x) away from each other in order to detect cosmic rays falling on the earth's surface. The time and observed coincidence counts are reported in separate columns, as well as the angle made by the normal to the detector with the earth's surface.
Date | Time (hrs) | Angle (deg) | Coincidence Counts | Mean Coinc/Hr | <math>\sigma = \sqrt{\mbox{Mean Coinc/Hr}}</math> | <math>N_\sigma</math> from Mean
---|---|---|---|---|---|---
9/12/07 | 20.5 | 30 | 2233 | 109 | 10.4 | 1
9/14/07 | 21 | 30 | 1582 | 75 | 8.7 | 2
10/3/07 | 21 | 30 | 2282 | 100 | 10.4 | 1
10/4/07 | 21 | 30 | 2029 | 97 | 9.8 | 0.1
10/15/07 | 21 | 30 | 2180 | 100 | 10 | 0.6
10/18/07 | 21 | 30 | 2064 | 99 | 9.9 | 0.1
10/23/07 | 21 | 30 | 2003 | 95 | 9.8 | 0.2
10/26/07 | 21 | 30 | 1943 | 93 | 9.6 | 0.5
The average count rate for a given trial is given in the 5th column by dividing column 4 by column 2.

One can expect a Poisson parent distribution because the probability of a cosmic ray interacting with the scintillator is low. The variance of the measurement in each trial is related to the counting rate by

:<math>\sigma^2 = \mu</math> = average counting rate

as a result of the assumption that the parent distribution is Poisson. The value of this <math>\sigma</math> is shown in column 6.

- Is the Poisson distribution the parent distribution in this experiment?
To try and answer the above question let's determine the mean and variance of the data:
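A minimal sketch of that computation, using the rounded rates from column 5 above (the original page's exact figures may differ slightly from these rounded inputs):

 // Sketch: mean and standard deviation of the 8 rates in column 5,
 // compared to the Poisson expectation sqrt(mean).
 #include <cmath>
 #include <cstdio>
 int main() {
     const double rate[8] = {109, 75, 100, 97, 100, 99, 95, 93};
     const int N = 8;
     double sum = 0.0;
     for (int i = 0; i < N; ++i) sum += rate[i];
     double mean = sum / N;                   // sample mean
     double ss = 0.0;
     for (int i = 0; i < N; ++i) ss += (rate[i] - mean) * (rate[i] - mean);
     double s = std::sqrt(ss / (N - 1));      // sample standard deviation
     std::printf("mean = %.1f  s = %.1f  sqrt(mean) = %.1f\n",
                 mean, s, std::sqrt(mean));
     return 0;
 }

The sample standard deviation comes out close to <math>\sqrt{\mbox{mean}}</math>, which is what a Poisson parent distribution would predict.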
If you approximate the Poisson distribution by a Gaussian, then the probability that any one measurement is within 1<math>\sigma</math> of the mean is 68% = Probability that a measurement of a Gaussian variable will lie within 1<math>\sigma</math> of the mean. For the Poisson distribution with a mean of 97 you would have 66% of the data occur within 1<math>\sigma</math>.
 root [26] ROOT::Math::poisson_cdf(97-sqrt(97),97)
 (double)1.67580969302001004e-01
 root [30] 1-2*ROOT::Math::poisson_cdf(97-sqrt(97),97)
 (const double)6.64838061395997992e-01
 root [28] ROOT::Math::normal_cdf(97-sqrt(97),sqrt(97),97)
 (double)1.58655253931457185e-01
 root [29] 1-2*ROOT::Math::normal_cdf(97-sqrt(97),sqrt(97),97)
 (const double)6.82689492137085630e-01
The 7th column above identifies how many sigma the mean of that trial is from the average <math>\bar{x}</math>.

It looks like we have 7/8 trials within 1<math>\sigma</math> = 87.5%.
How about the average sigma assuming Poisson?

If you take the average of the sigma estimates in column 6 you would get

:<math>\bar{\sigma} = \frac{1}{8}\sum_{i=1}^{8} \sigma_i \approx 9.8</math>

Using this one can calculate the variance of these sigma estimates as

:<math>s_{\sigma}^2 = \frac{1}{N-1}\sum_{i=1}^{8} \left ( \sigma_i - \bar{\sigma} \right )^2</math>

Comparing the <math>\sigma</math> from the 8 trials to the <math>\sigma</math> from the Poisson estimate, the two are

- In agreement within 2<math>\sigma</math>
What is really required however is an estimate of the probability that the assumption of a Poisson distribution is correct (Hypothesis test). This will be the subject of future sections.
Error Propagation
- no Covariances

If <math>f = f(x,y)</math> and the fluctuations in <math>x</math> and <math>y</math> are uncorrelated (<math>\sigma_{xy}^2 = 0</math>)

Then

:<math>\sigma_f^2 = \sigma_x^2 \left ( \frac{\partial f}{\partial x} \right )^2 + \sigma_y^2 \left ( \frac{\partial f}{\partial y} \right )^2</math>
Example: Table Area
Consider a calculation of a Table's Area

:<math>A = L \times W</math>

This means that the Area (A) is a function of the Length (L) and the Width (W) of the table.
We can write the variance of the area as

:<math>\sigma_A^2 = \sigma_L^2 \left ( \frac{\partial A}{\partial L} \right )^2 + \sigma_W^2 \left ( \frac{\partial A}{\partial W} \right )^2 + 2 \sigma_{LW}^2 \left ( \frac{\partial A}{\partial L} \right ) \left ( \frac{\partial A}{\partial W} \right ) = \sigma_L^2 W^2 + \sigma_W^2 L^2 + 2 \sigma_{LW}^2 L W</math>

where
<math>\sigma_{LW}^2</math> is defined as the Covariance between <math>L</math> and <math>W</math>.
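As a numerical sketch of this propagation (not from the original text; the lengths and uncertainties are hypothetical example values, and the covariance term is taken to be zero):

 // Sketch (hypothetical numbers): uncertainty of A = L*W assuming
 // no covariance between the L and W measurements.
 #include <cmath>
 #include <cstdio>
 int main() {
     const double L = 2.00, sigma_L = 0.02;   // meters, assumed values
     const double W = 1.00, sigma_W = 0.01;
     double A = L * W;
     // sigma_A^2 = sigma_L^2*W^2 + sigma_W^2*L^2  (covariance term dropped)
     double sigma_A = std::sqrt(sigma_L*sigma_L*W*W + sigma_W*sigma_W*L*L);
     std::printf("A = %.3f +/- %.3f m^2\n", A, sigma_A);
     return 0;
 }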
Weighted Mean and variance

The variance <math>\sigma^2</math> in the above examples was assumed to be the same for all measurements from the parent distribution.

What happens when you wish to combine measurements with unequal variances (different experiments measuring the same quantity)?
Let's assume we have a measured quantity having a mean <math>X</math> from a Gaussian parent distribution.

If you attempt to measure X with several different experiments you will likely have a series of results which vary in their precision.

Let's assume you have 2 experiments which obtained the averages <math>X_A</math> and <math>X_B</math>.
If we assume that each measurement is governed by a Gaussian distribution,
Then the probability of one experiment observing the value <math>X_A</math> is given by

:<math>P(x=X_A) \propto \frac{e^{-\frac{1}{2} \left ( \frac{X_A-X}{\sigma_A}\right )^2}}{\sigma_A}</math>

similarly the probability of the other experiment observing the average <math>X_B</math> is

:<math>P(x=X_B) \propto \frac{e^{-\frac{1}{2} \left ( \frac{X_B-X}{\sigma_B}\right )^2}}{\sigma_B}</math>

Now the combined probability that the first experiment measures the average <math>X_A</math> and the second <math>X_B</math> is given as the product of the two probabilities such that

:<math>P(x=X_A,X_B) \propto \frac{e^{-\frac{1}{2} \left ( \frac{X_A-X}{\sigma_A}\right )^2}}{\sigma_A} \frac{e^{-\frac{1}{2} \left ( \frac{X_B-X}{\sigma_B}\right )^2}}{\sigma_B} = \frac{e^{-\frac{1}{2}\left [ \left ( \frac{X_A-X}{\sigma_A}\right )^2+\left ( \frac{X_B-X}{\sigma_B}\right )^2\right ]}}{\sigma_A \sigma_B}\equiv \frac{e^{-\frac{1}{2} \chi^2}}{\sigma_A \sigma_B}</math>

where

:<math> \chi^2 \equiv \left ( \frac{X_A-X}{\sigma_A}\right )^2+\left ( \frac{X_B-X}{\sigma_B}\right )^2</math>
- The principle of maximum likelihood (to be the cornerstone of hypothesis testing) may be written as:
- The best estimate for the mean and standard deviation of the parent population is obtained when the observed set of values are the most likely to occur; i.e. the probability of observing them is a maximum.
Applying this principle to the two experiments means that the best estimate of <math>X</math> is made when <math>P(x=X_A,X_B)</math> is a maximum, which occurs when

:<math> \chi^2 \equiv \left ( \frac{X_A-X}{\sigma_A}\right )^2+\left ( \frac{X_B-X}{\sigma_B}\right )^2 = </math> Minimum

or

:<math>\frac{\partial \chi^2}{\partial X} = 2 \left ( \frac{X_A-X}{\sigma_A^2}\right )(-1) + 2 \left ( \frac{X_B-X}{\sigma_B^2}\right )(-1) = 0</math>

:<math>\Rightarrow X = \frac{\frac{X_A}{\sigma_A^2} + \frac{X_B}{\sigma_B^2}}{\frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2}}</math>
If each observable (<math>x_i</math>) is accompanied by an estimate of the uncertainty in that observable (<math>\sigma_i</math>) then the weighted mean is defined as

:<math>\bar{x} = \frac{\sum_{i=1}^{N} \frac{x_i}{\sigma_i^2}}{\sum_{i=1}^{N} \frac{1}{\sigma_i^2}}</math>

The variance of the distribution is defined as

:<math>\frac{1}{\sigma_{\bar{x}}^2} = \sum_{i=1}^{N} \frac{1}{\sigma_i^2}</math>
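A short numerical sketch of combining two measurements this way (the input values <math>X_A</math>, <math>X_B</math>, <math>\sigma_A</math>, <math>\sigma_B</math> are hypothetical):

 // Sketch: weighted mean of two measurements X_A +/- sigma_A and
 // X_B +/- sigma_B, with hypothetical example inputs.
 #include <cmath>
 #include <cstdio>
 int main() {
     const double XA = 10.0, sigmaA = 0.5;    // assumed example values
     const double XB = 11.0, sigmaB = 1.0;
     double wA = 1.0 / (sigmaA * sigmaA);     // weight = 1/sigma^2
     double wB = 1.0 / (sigmaB * sigmaB);
     double X     = (wA * XA + wB * XB) / (wA + wB);  // weighted mean
     double sigma = std::sqrt(1.0 / (wA + wB));       // combined sigma
     std::printf("X = %.3f +/- %.3f\n", X, sigma);
     return 0;
 }

Note that the more precise measurement dominates the result, and the combined uncertainty is smaller than either individual <math>\sigma</math>, as the formulas above require.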