Statistical Inference
Frequentist vs. Bayesian Inference
When it comes to testing a hypothesis, there are two dominant philosophies: the Frequentist and the Bayesian perspectives.
The dominant discussion for this class will be from the Frequentist perspective.
Frequentist statistical inference
- Statistical inference is made using a null-hypothesis test; that is, one that answers the question: assuming the null hypothesis is true, what is the probability of observing a value of the test statistic at least as extreme as the value actually observed?
The relative frequency of occurrence of an event, in a number of repetitions of the experiment, is a measure of the probability of that event.
Thus, if [math]n_t[/math] is the total number of trials and [math]n_x[/math] is the number of trials in which the event x occurred, the probability P(x) of the event occurring will be approximated by the relative frequency as follows:
- [math]P(x) \approx \frac{n_x}{n_t}.[/math]
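As a quick numerical illustration (a sketch that is not part of the original text; the event, trial count, and variable names are made up), the relative frequency of "heads" in simulated fair-coin tosses approaches the parent probability of 0.5 as the number of trials grows:
<pre>
import random

n_t = 100000                                          # total number of trials
n_x = sum(random.random() < 0.5 for _ in range(n_t))  # trials where the event (heads) occurred
print("P(x) ~ n_x/n_t =", n_x / n_t)                  # approaches 0.5 for large n_t
</pre>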
Bayesian inference
- Statistical inference is made by using evidence or observations to update or to newly infer the probability that a hypothesis may be true. The name "Bayesian" comes from the frequent use of Bayes' theorem in the inference process.
Bayes' theorem relates the conditional and marginal probabilities of events A and B, where B has a non-vanishing probability:
- [math]P(A|B) = \frac{P(B | A)\, P(A)}{P(B)}\,\! [/math].
Each term in Bayes' theorem has a conventional name:
- P(A) is the prior probability or marginal probability of A. It is "prior" in the sense that it does not take into account any information about B.
- P(B) is the prior or marginal probability of B, and acts as a normalizing constant.
- P(A|B) is the conditional probability of A, given B. It is also called the posterior probability because it is derived from or depends upon the specified value of B.
- P(B|A) is the conditional probability of B given A.
Bayes' theorem in this form gives a mathematical representation of how the conditional probability of event A given B is related to the converse conditional probability of B given A.
Example
Suppose there is a school in which 60% of the students are boys and 40% are girls.
The female students wear trousers or skirts in equal numbers; the boys all wear trousers.
An observer sees a (random) student from a distance; all the observer can see is that this student is wearing trousers.
What is the probability this student is a girl?
The correct answer can be computed using Bayes' theorem.
- [math] P(A) \equiv[/math] probability that the student observed is a girl = 0.4
- [math]P(B) \equiv[/math] probability that the student observed is wearing trousers = (60 + 20)/100 = 0.8 (all of the boys plus half of the girls wear trousers)
- [math]P(B|A) \equiv[/math] probability the student is wearing trousers given that the student is a girl
- [math]P(A|B) \equiv[/math] probability the student is a girl given that the student is wearing trousers
- [math]P(B|A) =0.5[/math]
- [math]P(A|B) = \frac{P(B|A) P(A)}{P(B)} = \frac{0.5 \times 0.4}{0.8} = 0.25.[/math]
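The arithmetic of this example can be checked directly from Bayes' theorem; the short sketch below simply encodes the numbers quoted above (the variable names are illustrative):
<pre>
# Bayes' theorem check for the trousers example
P_A   = 0.4                  # P(girl)
P_BgA = 0.5                  # P(trousers | girl)
P_B   = 0.6*1.0 + 0.4*0.5    # P(trousers) = P(trousers|boy)P(boy) + P(trousers|girl)P(girl) = 0.8

P_AgB = P_BgA * P_A / P_B    # P(girl | trousers)
print(P_AgB)                 # 0.25
</pre>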
Method of Maximum Likelihood
- The principle of maximum likelihood is the cornerstone of Frequentist-based hypothesis testing and may be stated as follows:
- The best estimate for the mean and standard deviation of the parent population is obtained when the observed set of values is the most likely to occur; i.e., the probability of the observation is a maximum.
Least Squares Fit to a Line
Applying the Method of Maximum Likelihood
Our objective is to find the best straight-line fit for an expected linear relationship between the dependent variate [math](y)[/math] and the independent variate [math](x)[/math].
If we let [math]y_0(x)[/math] represent the "true" linear relationship between independent variate [math]x[/math] and dependent variate [math]y[/math] such that
- [math]y_0(x) = A + B x[/math]
Then the Probability of observing the value [math]y_i[/math] with a standard deviation [math]\sigma_i[/math] is given by
- [math]P_i = \frac{1}{\sigma_i \sqrt{2 \pi}} e^{- \frac{1}{2} \left ( \frac{y_i - y_0(x_i)}{\sigma_i}\right)^2}[/math]
assuming an experiment done with sufficiently high statistics that it may be represented by a Gaussian parent distribution.
If you repeat the experiment [math]N[/math] times then the probability of deducing the values [math]A[/math] and [math]B[/math] from the data can be expressed as the joint probability of finding the [math]N[/math] values [math]y_i[/math] at the points [math]x_i[/math]
- [math]P(A,B) = \Pi_i \frac{1}{\sigma_i \sqrt{2 \pi}} e^{- \frac{1}{2} \left ( \frac{y_i - y_0(x_i)}{\sigma_i}\right)^2}[/math]
- [math]= \left ( \Pi_i \frac{1}{\sigma_i \sqrt{2 \pi}}\right ) e^{- \frac{1}{2} \sum \left ( \frac{y_i - y_0(x_i)}{\sigma_i}\right)^2}[/math] = Max
The maximum probability will result in the best values for [math]A[/math] and [math]B[/math]
This means
- [math]\chi^2 = \sum \left ( \frac{y_i - y_0(x_i)}{\sigma_i}\right)^2 = \sum \left ( \frac{y_i - A - B x_i }{\sigma_i}\right)^2[/math] = Min
The minimum of [math]\chi^2[/math] occurs when the function is minimized with respect to both parameters A and B; i.e.,
- [math]\frac{\partial \chi^2}{\partial A} = \sum \frac{ \partial}{\partial A} \left ( \frac{y_i - A - B x_i }{\sigma_i}\right)^2=0[/math]
- [math]\frac{\partial \chi^2}{\partial B} = \sum \frac{ \partial}{\partial B} \left ( \frac{y_i - A - B x_i }{\sigma_i}\right)^2=0[/math]
- If [math]\sigma_i = \sigma[/math]
- All variances are the same (weighted fits don't make this assumption)
Then
- [math]\frac{\partial \chi^2}{\partial A} = \frac{1}{\sigma^2}\sum \frac{ \partial}{\partial A} \left ( y_i - A - B x_i \right)^2=\frac{-2}{\sigma^2}\sum \left ( y_i - A - B x_i \right)=0[/math]
- [math]\frac{\partial \chi^2}{\partial B} = \frac{1}{\sigma^2}\sum \frac{ \partial}{\partial B} \left ( y_i - A - B x_i \right)^2=\frac{-2}{\sigma^2}\sum x_i \left ( y_i - A - B x_i \right)=0[/math]
or
- [math]\sum \left ( y_i - A - B x_i \right)=0[/math]
- [math]\sum x_i \left( y_i - A - B x_i \right)=0[/math]
The above equations represent a set of 2 simultaneous equations with 2 unknowns, which can be solved.
- [math]\sum y_i = \sum A + B \sum x_i[/math]
- [math]\sum x_i y_i = A \sum x_i + B \sum x_i^2[/math]
- [math]\left( \begin{array}{c} \sum y_i \\ \sum x_i y_i \end{array} \right) = \left( \begin{array}{cc} N & \sum x_i\\
\sum x_i & \sum x_i^2 \end{array} \right)\left( \begin{array}{c} A \\ B \end{array} \right)[/math]
The Method of Determinants
for the matrix problem:
- [math]\left( \begin{array}{c} y_1 \\ y_2 \end{array} \right) = \left( \begin{array}{cc} a_{11} & a_{12}\\ a_{21} & a_{22} \end{array} \right)\left( \begin{array}{c} x_1 \\ x_2 \end{array} \right)[/math]
the above can be written as
- [math]y_1 = a_{11} x_1 + a_{12} x_2[/math]
- [math]y_2 = a_{21} x_1 + a_{22} x_2[/math]
solving for [math]x_1[/math] assuming [math]y_1[/math] and [math]y_2[/math] are known: multiply the first equation by [math]a_{22}[/math], the second by [math]-a_{12}[/math], and add
- [math]a_{22} (y_1 = a_{11} x_1 + a_{12} x_2)[/math]
- [math]-a_{12} (y_2 = a_{21} x_1 + a_{22} x_2)[/math]
- [math]\Rightarrow a_{22} y_1 - a_{12} y_2 = (a_{11}a_{22} - a_{12}a_{21}) x_1[/math]
- [math]\left| \begin{array}{cc} y_1 & a_{12}\\ y_2 & a_{22} \end{array} \right| = \left| \begin{array}{cc} a_{11} & a_{12}\\ a_{21} & a_{22} \end{array} \right| x_1[/math]
or
- [math]x_1 = \frac{\left| \begin{array}{cc} y_1 & a_{12}\\ y_2 & a_{22} \end{array} \right| }{\left| \begin{array}{cc} a_{11} & a_{12}\\ a_{21} & a_{22} \end{array} \right| }[/math] similarly [math]x_2 = \frac{\left| \begin{array}{cc} a_{11} & y_1\\ a_{21} & y_2 \end{array} \right| }{\left| \begin{array}{cc} a_{11} & a_{12}\\ a_{21} & a_{22} \end{array} \right| }[/math]
Solutions exist as long as
- [math]\left| \begin{array}{cc} a_{11} & a_{12}\\ a_{21} & a_{22} \end{array} \right| \ne 0[/math]
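A minimal numerical sketch of this determinant (Cramer's rule) solution, using arbitrary example values for the matrix elements and comparing against a direct linear solve:
<pre>
import numpy as np

a = np.array([[2.0, 1.0],     # a11, a12
              [1.0, 3.0]])    # a21, a22
y = np.array([5.0, 10.0])     # y1, y2

det = a[0,0]*a[1,1] - a[0,1]*a[1,0]        # must be non-zero for a solution to exist
x1  = (y[0]*a[1,1] - a[0,1]*y[1]) / det    # Cramer's rule for x1
x2  = (a[0,0]*y[1] - a[1,0]*y[0]) / det    # Cramer's rule for x2
print(x1, x2, np.linalg.solve(a, y))       # the two methods agree
</pre>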
Applying the method of determinants to the maximum likelihood problem above gives
- [math]A = \frac{\left| \begin{array}{cc} \sum y_i & \sum x_i\\ \sum x_i y_i & \sum x_i^2 \end{array}\right|}{\left| \begin{array}{cc} N & \sum x_i\\ \sum x_i & \sum x_i^2 \end{array}\right|}[/math]
- [math]B = \frac{\left| \begin{array}{cc} N & \sum y_i\\ \sum x_i & \sum x_i y_i \end{array}\right|}{\left| \begin{array}{cc} N & \sum x_i\\ \sum x_i & \sum x_i^2 \end{array}\right|}[/math]
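A minimal numerical sketch of these expressions for A and B (equal-σ case); the data values and variable names are illustrative only, not taken from the text:
<pre>
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])       # roughly y = 0 + 2x with scatter
N = len(x)

det = N*np.sum(x**2) - np.sum(x)**2            # common denominator determinant
A   = (np.sum(y)*np.sum(x**2) - np.sum(x)*np.sum(x*y)) / det   # intercept
B   = (N*np.sum(x*y) - np.sum(x)*np.sum(y)) / det              # slope
print(A, B)                                    # compare with np.polyfit(x, y, 1)
</pre>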
If the uncertainty in all the measurements is not the same then we need to insert [math]\sigma_i[/math] back into the system of equations.
- [math]A = \frac{\left| \begin{array}{cc} \sum\frac{ y_i}{\sigma_i^2} & \sum\frac{ x_i}{\sigma_i^2}\\ \sum\frac{ x_i y_i}{\sigma_i^2} & \sum\frac{ x_i^2}{\sigma_i^2} \end{array}\right|}{\left| \begin{array}{cc} \sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2}\\ \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \end{array}\right|} \;\;\;\; B = \frac{\left| \begin{array}{cc} \sum \frac{1}{\sigma_i^2} & \sum \frac{ y_i}{\sigma_i^2}\\ \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i y_i}{\sigma_i^2} \end{array}\right|}{\left| \begin{array}{cc} \sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2}\\ \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \end{array}\right|}[/math]
Uncertainty in the Linear Fit parameters
As always the uncertainty is determined by the Taylor expansion in quadrature such that
- [math]\sigma_P^2 = \sum \left [ \sigma_i^2 \left ( \frac{\partial P}{\partial y_i}\right )^2\right ][/math] = error in parameter P: here covariance has been assumed to be zero
By definition of variance
- [math]\sigma_i^2 \approx s^2 = \frac{\sum \left( y_i - A - B x_i \right)^2}{N -2}[/math] : there are 2 parameters and N data points, which translates to (N-2) degrees of freedom.
The least squares fit (assuming equal [math]\sigma[/math]) has the following solutions for the parameters A and B:
- [math]A = \frac{\left| \begin{array}{cc} \sum y_i & \sum x_i\\ \sum x_i y_i & \sum x_i^2 \end{array}\right|}{\left| \begin{array}{cc} N & \sum x_i\\ \sum x_i & \sum x_i^2 \end{array}\right|} \;\;\;\; B = \frac{\left| \begin{array}{cc} N & \sum y_i\\ \sum x_i & \sum x_i y_i \end{array}\right|}{\left| \begin{array}{cc} N & \sum x_i\\ \sum x_i & \sum x_i^2 \end{array}\right|}[/math]
uncertainty in A
- [math]\frac{\partial A}{\partial y_j} =\frac{\partial}{\partial y_j} \frac{\sum y_i \sum x_i^2 - \sum x_i \sum x_i y_i }{\left| \begin{array}{cc} N & \sum x_i\\ \sum x_i & \sum x_i^2 \end{array}\right|}[/math]
- [math] = \frac{(1) \sum x_i^2 - x_j\sum x_i }{\left| \begin{array}{cc} N & \sum x_i\\ \sum x_i & \sum x_i^2 \end{array}\right|}[/math] only the [math]y_j[/math] term survives
- [math] = D \left ( \sum x_i^2 - x_j\sum x_i \right)[/math]
Let
- [math]D \equiv \frac{1}{\left| \begin{array}{cc} N & \sum x_i\\ \sum x_i & \sum x_i^2 \end{array}\right|}=\frac{1}{N\sum x_i^2 - \sum x_i \sum x_i }[/math]
- [math]\sigma_A^2 = \sum_{j=1}^N \left [ \sigma_j^2 \left ( \frac{\partial A}{\partial y_j}\right )^2\right ][/math]
- [math]= \sum_{j=1}^N \sigma_j^2 \left ( D \left ( \sum x_i^2 - x_j\sum x_i \right) \right )^2[/math]
- [math] = \sigma^2 D^2 \sum_{j=1}^N \left ( \sum x_i^2 - x_j\sum x_i \right )^2[/math] : Assume [math]\sigma_i = \sigma[/math]
- [math] = \sigma^2 D^2 \sum_{j=1}^N \left [ \left ( \sum x_i^2\right )^2 - 2 x_j \sum x_i^2 \sum x_i + x_j^2 \left ( \sum x_i \right )^2 \right ][/math]
- [math] = \sigma^2 D^2 \left [ N \left ( \sum x_i^2\right )^2 - 2 \sum x_i^2 \sum x_i \sum_{j=1}^N x_j + \left ( \sum x_i \right )^2 \sum_{j=1}^N x_j^2 \right ][/math]
- [math] \sum_{j=1}^N x_j = \sum x_i \;\;\;\; \sum_{j=1}^N x_j^2 = \sum x_i^2[/math] Both sums run over the same [math]N[/math] observations
- [math] = \sigma^2 D^2\sum x_i^2\left [ N \sum x_i^2 - 2 \left ( \sum x_i \right )^2 + \left ( \sum x_i \right )^2 \right ] = \sigma^2 D^2\sum x_i^2\left [ N \sum x_i^2 - \left ( \sum x_i \right )^2 \right ][/math]
- [math] = \sigma^2 D^2\sum x_i^2 \frac{1}{D}[/math]
- [math] \sigma_A^2= \sigma^2 \frac{\sum x_i^2 }{N\sum x_i^2 - \left (\sum x_i \right)^2}[/math]
- [math] \sigma_A^2= \frac{\sum \left( y_i - A - B x_i \right)^2}{N -2} \frac{\sum x_i^2 }{N\sum x_i^2 - \left (\sum x_i \right)^2}[/math]
If we redefine our origin in the linear plot so the line is centered at x=0 then
- [math]\sum{x_i} = 0[/math]
- [math]\Rightarrow \frac{\sum x_i^2 }{N\sum x_i^2 - \left (\sum x_i \right)^2} = \frac{\sum x_i^2 }{N\sum x_i^2 } = \frac{1}{N}[/math]
or
- [math] \sigma_A^2= \frac{\sum \left( y_i - A - B x_i \right)^2}{N -2} \frac{1}{N} = \frac{\sigma^2}{N}[/math]
- Note
- The parameter A is the y-intercept, so it makes some intuitive sense that the error in the y-intercept would be dominated by the statistical error in y
uncertainty in B
- [math]B = \frac{\left| \begin{array}{cc} N & \sum y_i\\ \sum x_i & \sum x_i y_i \end{array}\right|}{\left| \begin{array}{cc} N & \sum x_i\\ \sum x_i & \sum x_i^2 \end{array}\right|}
[/math]
- [math]\sigma_B^2 = \sum_{j=1}^N \left [ \sigma_j^2 \left ( \frac{\partial B}{\partial y_j}\right )^2\right ][/math]
- [math]\frac{\partial B}{\partial y_j} =\frac{\partial}{\partial y_j} \frac{\left| \begin{array}{cc} N & \sum y_i\\ \sum x_i & \sum x_i y_i \end{array}\right|}{\left| \begin{array}{cc} N & \sum x_i\\ \sum x_i & \sum x_i^2 \end{array}\right|}= \frac{\partial}{\partial y_j} D \left ( N\sum x_i y_i -\sum x_i \sum y_i \right )[/math]
- [math]= D \left ( N x_j - \sum x_i \right) [/math]
- [math]\sigma_B^2 = \sum_{j=1}^N \left [ \sigma_j^2 D^2 \left ( N x_j - \sum x_i \right)^2 \right ][/math]
- [math]= \sigma^2 D^2 \sum_{j=1}^N \left [ \left ( N x_j - \sum x_i \right)^2 \right ][/math] assuming [math]\sigma_j = \sigma[/math]
- [math]= \sigma^2 D^2 \sum_{j=1}^N \left [ N^2 x_j^2 - 2N x_j \sum x_i + \left ( \sum x_i \right )^2 \right ][/math]
- [math]= \sigma^2 D^2 \left [ N^2 \sum_{j=1}^N x_j^2 - 2N \sum x_i \sum_{j=1}^N x_j + N \left ( \sum x_i \right )^2 \right ][/math]
- [math]= \sigma^2 D^2 \left [ N^2 \sum x_i^2 - 2N \left ( \sum x_i \right )^2 + N \left ( \sum x_i \right )^2 \right ][/math]
- [math]= N \sigma^2 D^2 \left [ N \sum x_i^2 - \left ( \sum x_i \right )^2 \right ][/math]
- [math] = N D^2 \sigma^2 \frac{1}{D} = ND \sigma^2[/math]
- [math] \sigma_B^2= \frac{N \frac{\sum \left( y_i - A - B x_i \right)^2}{N -2}} {N\sum x_i^2 - \left (\sum x_i \right)^2}[/math]
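A self-contained sketch of these uncertainty formulas, repeating the made-up data used after the determinant solution above (all names and values are illustrative):
<pre>
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
N = len(x)

det = N*np.sum(x**2) - np.sum(x)**2            # this determinant is 1/D in the notation above
A   = (np.sum(y)*np.sum(x**2) - np.sum(x)*np.sum(x*y)) / det
B   = (N*np.sum(x*y) - np.sum(x)*np.sum(y)) / det
s2  = np.sum((y - A - B*x)**2) / (N - 2)       # sample variance with N-2 degrees of freedom
sigma_A = np.sqrt(s2 * np.sum(x**2) / det)     # uncertainty in the intercept A
sigma_B = np.sqrt(N * s2 / det)                # uncertainty in the slope B
print(A, B, sigma_A, sigma_B)
</pre>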
Linear Fit with error
From above we know that if each independent measurement has a different error [math]\sigma_i[/math] then the fit parameters are given by
- [math]A = \frac{\left| \begin{array}{cc} \sum\frac{ y_i}{\sigma_i^2} & \sum\frac{ x_i}{\sigma_i^2}\\ \sum\frac{ x_i y_i}{\sigma_i^2} & \sum\frac{ x_i^2}{\sigma_i^2} \end{array}\right|}{\left| \begin{array}{cc} \sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2}\\ \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \end{array}\right|} \;\;\;\; B = \frac{\left| \begin{array}{cc} \sum \frac{1}{\sigma_i^2} & \sum \frac{ y_i}{\sigma_i^2}\\ \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i y_i}{\sigma_i^2} \end{array}\right|}{\left| \begin{array}{cc} \sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2}\\ \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \end{array}\right|}[/math]
Weighted Error in A
- [math]\sigma_A^2 = \sum_{j=1}^N \left [ \sigma_j^2 \left ( \frac{\partial A}{\partial y_j}\right )^2\right ][/math]
- [math]A = \frac{\left| \begin{array}{cc} \sum\frac{ y_i}{\sigma_i^2} & \sum\frac{ x_i}{\sigma_i^2}\\ \sum\frac{ x_i y_i}{\sigma_i^2} & \sum\frac{ x_i^2}{\sigma_i^2} \end{array}\right|}{\left| \begin{array}{cc} \sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2}\\ \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \end{array}\right|} [/math]
Let
- [math]D = \frac{1}{\left| \begin{array}{cc} \sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2}\\ \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \end{array}\right| }=
\frac{1}{\sum \frac{1}{\sigma_i^2} \sum \frac{x_i^2}{\sigma_i^2} - \sum \frac{x_i}{\sigma_i^2} \sum \frac{x_i}{\sigma_i^2}}[/math]
- [math]\frac{\partial A}{\partial y_j} = D\frac{\partial }{\partial y_j} \left [ \sum\frac{ y_i}{\sigma_i^2} \sum\frac{ x_i^2}{\sigma_i^2}- \sum\frac{ x_i}{\sigma_i^2} \sum\frac{ x_i y_i}{\sigma_i^2} \right ][/math]
- [math]= D \left ( \frac{ 1}{\sigma_j^2} \sum\frac{ x_i^2}{\sigma_i^2}- \frac{ x_j}{\sigma_j^2}\sum\frac{ x_i}{\sigma_i^2} \right )[/math]
- [math]\sigma_A^2 = \sum_{j=1}^N \left [ \sigma_j^2 \left ( \frac{\partial A}{\partial y_j}\right )^2\right ] = \sum_{j=1}^N \left [ \sigma_j^2 D^2 \left ( \frac{ 1}{\sigma_j^2} \sum\frac{ x_i^2}{\sigma_i^2}- \frac{ x_j}{\sigma_j^2}\sum\frac{ x_i}{\sigma_i^2} \right )^2\right ][/math]
- [math]= D^2 \sum_{j=1}^N \sigma_j^2 \left [ \frac{ 1}{\sigma_j^4} \left ( \sum\frac{ x_i^2}{\sigma_i^2}\right)^2 - 2 \frac{ x_j}{\sigma_j^4} \sum\frac{ x_i^2}{\sigma_i^2}\sum\frac{ x_i}{\sigma_i^2} + \frac{ x_j^2}{\sigma_j^4} \left (\sum\frac{ x_i}{\sigma_i^2} \right) ^2 \right ][/math]
- [math]= D^2 \left [ \sum_{j=1}^N \frac{ 1}{\sigma_j^2} \left ( \sum\frac{ x_i^2}{\sigma_i^2}\right)^2 - 2 \sum_{j=1}^N \frac{ x_j}{\sigma_j^2} \sum\frac{ x_i^2}{\sigma_i^2}\sum\frac{ x_i}{\sigma_i^2} + \sum_{j=1}^N \frac{ x_j^2}{\sigma_j^2} \left (\sum\frac{ x_i}{\sigma_i^2} \right) ^2 \right ][/math]
- [math]= D^2 \left ( \sum\frac{ x_i^2}{\sigma_i^2}\right) \left [ \sum\frac{ 1}{\sigma_i^2} \sum\frac{ x_i^2}{\sigma_i^2} - 2 \left (\sum\frac{ x_i}{\sigma_i^2}\right)^2 + \left (\sum\frac{ x_i}{\sigma_i^2} \right )^2 \right ][/math]
- [math]= D^2 \left ( \sum\frac{ x_i^2}{\sigma_i^2}\right) \left [ \sum\frac{ 1}{\sigma_i^2} \sum\frac{ x_i^2}{\sigma_i^2} - \left( \sum \frac{ x_i}{\sigma_i^2} \right)^2 \right ][/math]
- [math]= D \left ( \sum\frac{ x_i^2}{\sigma_i^2}\right) = \frac{ \left ( \sum\frac{ x_i^2}{\sigma_i^2}\right) }{\sum \frac{1}{\sigma_i^2} \sum \frac{x_i^2}{\sigma_i^2} - \sum \frac{x_i}{\sigma_i^2} \sum \frac{x_i}{\sigma_i^2}}[/math]
- Compare with the unweighted error
- [math] \sigma_A^2= \frac{\sum \left( y_i - A - B x_i \right)^2}{N -2} \frac{\sum x_i^2 }{N\sum x_i^2 - \left (\sum x_i \right)^2}[/math]
Weighted Error in B
- [math]\sigma_B^2 = \sum_{j=1}^N \left [ \sigma_j^2 \left ( \frac{\partial B}{\partial y_j}\right )^2\right ][/math]
- [math]B = \frac{\left| \begin{array}{cc} \sum \frac{1}{\sigma_i^2} & \sum \frac{ y_i}{\sigma_i^2}\\ \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i y_i}{\sigma_i^2} \end{array}\right|}{\left| \begin{array}{cc} \sum \frac{1}{\sigma_i^2} & \sum \frac{x_i}{\sigma_i^2}\\ \sum \frac{x_i}{\sigma_i^2} & \sum \frac{x_i^2}{\sigma_i^2} \end{array}\right|}[/math]
- [math]\frac{\partial B}{\partial y_j} = D\frac{\partial }{\partial y_j} \left [\sum \frac{1}{\sigma_i^2}\sum \frac{x_i y_i}{\sigma_i^2} - \sum \frac{ y_i}{\sigma_i^2}\sum \frac{x_i}{\sigma_i^2} \right ][/math]
- [math]= D \left ( \frac{ x_j}{\sigma_j^2} \sum\frac{ 1}{\sigma_i^2}- \frac{1}{\sigma_j^2}\sum\frac{ x_i}{\sigma_i^2} \right )[/math]
- [math]\sigma_B^2 = \sum_{j=1}^N \left [ \sigma_j^2 \left ( \frac{\partial B}{\partial y_j}\right )^2\right ][/math]
- [math]= D^2 \sum_{j=1}^N \sigma_j^2 \left [ \frac{ x_j^2}{\sigma_j^4} \left (\sum\frac{ 1}{\sigma_i^2}\right)^2 - 2 \frac{ x_j}{\sigma_j^4} \sum\frac{ 1}{\sigma_i^2}\sum\frac{ x_i}{\sigma_i^2} + \frac{1}{\sigma_j^4} \left (\sum\frac{ x_i}{\sigma_i^2} \right )^2\right ][/math]
- [math]= D^2 \left [ \sum\frac{ x_i^2}{\sigma_i^2} \left (\sum\frac{ 1}{\sigma_i^2}\right)^2 - 2 \left (\sum\frac{ x_i}{\sigma_i^2}\right)^2 \sum\frac{ 1}{\sigma_i^2} + \sum\frac{ 1}{\sigma_i^2} \left (\sum\frac{ x_i}{\sigma_i^2} \right )^2\right ][/math]
- [math]= D^2 \left ( \sum\frac{ 1}{\sigma_i^2} \right ) \left [ \sum\frac{ 1}{\sigma_i^2} \sum\frac{ x_i^2}{\sigma_i^2} - \left (\sum\frac{ x_i}{\sigma_i^2} \right )^2\right ][/math]
- [math]= D \sum\frac{ 1}{\sigma_i^2}[/math]
- [math]\sigma_B^2 = \frac{ \sum\frac{ 1}{\sigma_i^2}}{\sum \frac{1}{\sigma_i^2} \sum \frac{x_i^2}{\sigma_i^2} - \sum \frac{x_i}{\sigma_i^2} \sum \frac{x_i}{\sigma_i^2}}[/math]
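A sketch of the weighted fit and its parameter errors; the arrays x, y and the per-point uncertainties sigma are illustrative values, not data from the text:
<pre>
import numpy as np

x     = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y     = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sigma = np.array([0.2, 0.3, 0.2, 0.4, 0.3])    # per-point uncertainties sigma_i

w   = 1.0 / sigma**2                           # weights 1/sigma_i^2
det = np.sum(w)*np.sum(w*x**2) - np.sum(w*x)**2            # this is 1/D in the notation above
A   = (np.sum(w*y)*np.sum(w*x**2) - np.sum(w*x)*np.sum(w*x*y)) / det
B   = (np.sum(w)*np.sum(w*x*y) - np.sum(w*x)*np.sum(w*y)) / det
sigma_A = np.sqrt(np.sum(w*x**2) / det)        # weighted error in A
sigma_B = np.sqrt(np.sum(w) / det)             # weighted error in B
print(A, B, sigma_A, sigma_B)
</pre>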
Correlation Probability
Once the Linear Fit has been performed, the next step will be to determine a probability that the Fit is actually describing the data.
The Correlation Probability (R) is one method used to try to determine this probability.
This method evaluates the "slope" parameter to determine if there is a correlation between the dependent and independent variables, y and x.
The linear fit above was done to minimize [math]\chi^2[/math] for the following model
- [math]y = A + Bx[/math]
What if we turn this equation around such that
- [math]x = A^{\prime} + B^{\prime}y[/math]
If there is no correlation between [math]x[/math] and [math]y[/math] then [math]B^{\prime} =0[/math]
If there is complete correlation between [math]x[/math] and [math]y[/math] then
[math]\Rightarrow[/math]
- [math]A = -\frac{A^{\prime}}{B^{\prime}}[/math] and [math]B = \frac{1}{B^{\prime}}[/math]
- and [math]BB^{\prime} = 1[/math]
So one can define a metric [math]BB^{\prime}[/math] which has the natural range between 0 and 1 such that
- [math]R \equiv \sqrt{B B^{\prime}}[/math]
since
- [math]B = \frac{\left| \begin{array}{cc} N & \sum y_i\\ \sum x_i & \sum x_i y_i \end{array}\right|}{\left| \begin{array}{cc} N & \sum x_i\\ \sum x_i & \sum x_i^2 \end{array}\right|} = \frac{N\sum x_i y_i - \sum y_i \sum x_i }{N\sum x_i^2 - \sum x_i \sum x_i }[/math]
and one can show that
- [math]B^{\prime} = \frac{\left| \begin{array}{cc} N & \sum x_i\\ \sum y_i & \sum x_i y_i \end{array}\right|}{\left| \begin{array}{cc} N & \sum y_i\\ \sum y_i & \sum y_i^2 \end{array}\right|} = \frac{N\sum x_i y_i - \sum x_i \sum y_i }{N\sum y_i^2 - \sum y_i \sum y_i }[/math]
Thus
- [math]R = \sqrt{ \frac{N\sum x_i y_i - \sum y_i \sum x_i }{N\sum x_i^2 - \sum x_i \sum x_i } \frac{N\sum x_i y_i - \sum x_i \sum y_i }{N\sum y_i^2 - \sum y_i \sum y_i } }[/math]
- [math]= \frac{N\sum x_i y_i - \sum y_i \sum x_i }{\sqrt{\left( N\sum x_i^2 - \sum x_i \sum x_i \right ) \left (N\sum y_i^2 - \sum y_i \sum y_i\right) } }[/math]
- Note
- The correlation coefficient (R) CAN'T, by itself, be used to indicate the degree of correlation. The probability distribution of [math]R[/math] can be derived from a 2-D Gaussian, but knowledge of the correlation coefficient of the parent population [math](\rho)[/math] is required to evaluate the [math]R[/math] of the sample distribution.
Instead one assumes a correlation of [math]\rho=0[/math] in the parent distribution and then compares the sample value of [math]R[/math] with what you would get if there were no correlation.
The smaller the probability that uncorrelated data would produce the observed value of [math]R[/math], the more likely it is that the data are correlated and that the linear fit is justified.
- [math]P_R(R,\nu) = \frac{1}{\sqrt{\pi}} \frac{\Gamma\left ( \frac{\nu+1}{2}\right )}{\Gamma \left ( \frac{\nu}{2}\right)} \left( 1-R^2\right)^{\left( \frac{\nu-2}{2}\right)}[/math]
= Probability that any random sample of UNCORRELATED data would yield the correlation coefficient [math]R[/math]
where
- [math]\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t} dt[/math]
(ROOT::Math::tgamma(double x) )
- [math]\nu=N-2[/math] = number of degrees of freedom = Number of data points - Number of parameters in fit function
Derived in "Pugh and Winslow, The Analysis of Physical Measurement, Addison-Wesley Publishing, 1966."
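A sketch evaluating R and the probability density above for made-up data; here Python's math.gamma plays the role the text assigns to ROOT::Math::tgamma, and the arrays are illustrative:
<pre>
import numpy as np
from math import gamma, sqrt, pi

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
N = len(x)

# correlation coefficient R
R = (N*np.sum(x*y) - np.sum(x)*np.sum(y)) / sqrt(
        (N*np.sum(x**2) - np.sum(x)**2) * (N*np.sum(y**2) - np.sum(y)**2))

nu  = N - 2                                    # degrees of freedom
P_R = (1/sqrt(pi)) * gamma((nu+1)/2) / gamma(nu/2) * (1 - R**2)**((nu-2)/2)
print(R, P_R)                                  # R near 1 gives a small P_R for these data
</pre>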
Least Squares fit to a Polynomial
Let's assume we wish to now fit a polynomial instead of a straight line to the data.
- [math]y(x) = \sum_{j=0}^{n} a_j x^{j}=\sum_{j=0}^{n} a_j f_j(x)[/math]
- [math]f_j(x) =[/math] a function which does not depend on [math]a_j[/math]
Then the Probability of observing the value [math]y_i[/math] with a standard deviation [math]\sigma_i[/math] is given by
- [math]P_i(a_0,a_1, \cdots ,a_n) = \frac{1}{\sigma_i \sqrt{2 \pi}} e^{- \frac{1}{2} \left ( \frac{y_i - \sum_{j=0}^{n} a_j f_j(x_i)}{\sigma_i}\right)^2}[/math]
assuming an experiment done with sufficiently high statistics that it may be represented by a Gaussian parent distribution.
If you repeat the experiment [math]N[/math] times then the probability of deducing the values [math]a_n[/math] from the data can be expressed as the joint probability of finding [math]N[/math] [math]y_i[/math] values for each [math]x_i[/math]
- [math]P(a_0,a_1, \cdots ,a_n) = \Pi_i P_i(a_0,a_1, \cdots ,a_n) =\Pi_i \frac{1}{\sigma_i \sqrt{2 \pi}} e^{- \frac{1}{2} \left ( \frac{y_i - \sum_{j=0}^{n} a_j f_j(x_i)}{\sigma_i}\right)^2} \propto e^{- \frac{1}{2}\left [ \sum_i^N \left ( \frac{y_i - \sum_{j=0}^{n} a_j f_j(x_i)}{\sigma_i}\right)^2 \right]}[/math]
Once again, the probability is maximized when the summation in the argument of the exponential is a minimum.
Let
- [math]\chi^2 = \sum_i^N \left ( \frac{y_i - \sum_{j=0}^{n} a_j f_j(x_i)}{\sigma_i}\right )^2[/math]
where [math]N[/math] = number of data points and [math]n[/math] = order of polynomial used to fit the data.
The minimum in [math]\chi^2[/math] is found by setting the partial derivative with respect to the fit parameters [math]\left (\frac{\partial \chi^2}{\partial a_k} \right)[/math] to zero
- [math]\frac{\partial \chi^2}{\partial a_k} = \frac{\partial}{\partial a_k}\sum_i^N \frac{1}{\sigma_i^2} \left ( y_i - \sum_{j=0}^{n} a_j f_j(x_i)\right )^2[/math]
- [math]= \sum_i^N 2 \frac{1}{\sigma_i^2} \left ( y_i - \sum_{j=0}^{n} a_j f_j(x_i)\right ) \frac{\partial \left( - \sum_{j=0}^{n} a_j f_j(x_i) \right)}{\partial a_k}[/math]
- [math]= \sum_i^N 2 \frac{1}{\sigma_i^2}\left ( y_i - \sum_{j=0}^{n} a_j f_j(x_i)\right ) \left ( - f_k(x_i) \right)[/math]
- [math]= 2 \sum_i^N \frac{1}{\sigma_i^2}\left ( y_i - \sum_{j=0}^{n} a_j f_j(x_i)\right ) \left ( - f_k(x_i) \right)[/math]
- [math]= 2\sum_i^N \frac{1}{\sigma_i^2} \left ( - f_k(x_i) \right) \left ( y_i - \sum_{j=0}^{n} a_j f_j(x_i)\right ) =0[/math]
- [math]\Rightarrow \sum_i^N y_i \frac{ f_k(x_i)}{\sigma_i^2} = \sum_i^N \frac{ f_k(x_i)}{\sigma_i^2} \sum_{j=0}^{n} a_j f_j(x_i)= \sum_{j=0}^{n} a_j \sum_i^N \frac{ f_k(x_i)}{\sigma_i^2} f_j(x_i)[/math]
You now have a system of [math]n[/math] coupled equations for the parameters [math]a_j[/math], with each equation summing over the [math]N[/math] measurements.
You could use the method of determinants, as we did to find the parameters for a linear fit, but it is more convenient to use matrices in a technique referred to as regression analysis.
Regression Analysis
The parameters [math]a_j[/math] in the previous section are linear parameters to a general function which may be a polynomial.
The system of equations
- [math]\sum_i^N y_i \frac{ f_k(x_i)}{\sigma_i^2} = \sum_{j=0}^{n} a_j \sum_i^N \frac{ f_k(x_i)}{\sigma_i^2} f_j(x_i)[/math]
may be represented in matrix form as
- [math]\tilde{\beta} = \tilde{a} \tilde{\alpha}[/math]
where
- [math]\beta_k= \sum_i^N y_i \frac{ f_k(x_i)}{\sigma_i^2} \;\;\;\; \tilde{\beta} = ( \beta_1 , \beta_2 , \cdots , \beta_n )[/math] = a row matrix of order [math]n[/math]
- [math]\tilde{a} = ( a_1, a_2, \cdots , a_n)[/math] = a row matrix of the parameters
- [math]\alpha_{kj}=\sum_i^N \frac{ f_k(x_i)}{\sigma_i^2} f_j(x_i) \;\;\;\; \tilde{\alpha} = \left ( \begin{array}{cccc}
\alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n} \\
\alpha_{21} & \alpha_{22} & \cdots & \alpha_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\alpha_{n1} & \alpha_{n2} & \cdots & \alpha_{nn} \\
\end{array} \right) [/math] = a symmetric [math]n \times n[/math] matrix
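A sketch of this matrix solution for a quadratic fit (n = 2, so the basis functions are f_j(x) = x^j for j = 0, 1, 2); the data, uncertainties, and variable names are illustrative assumptions only:
<pre>
import numpy as np

x     = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y     = np.array([1.2, 2.1, 5.3, 10.2, 16.8, 26.1])   # roughly 1 + x^2 with scatter
sigma = np.full_like(x, 0.5)                          # per-point uncertainties
n     = 2                                             # order of the polynomial

F     = np.vstack([x**j for j in range(n+1)]).T       # F[i,j] = f_j(x_i) = x_i^j
w     = 1.0 / sigma**2
beta  = F.T @ (w * y)                                 # beta_k  = sum_i y_i f_k(x_i)/sigma_i^2
alpha = F.T @ (F * w[:, None])                        # alpha_kj = sum_i f_k(x_i) f_j(x_i)/sigma_i^2
a     = np.linalg.solve(alpha, beta)                  # alpha is symmetric, so solving
print(a)                                              # alpha a = beta matches the row form above
</pre>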
Go Back Forest_Error_Analysis_for_the_Physical_Sciences#Statistical_inference