Forest ErrAna StatDist
Average and Variance
Average
The word "average" is used to describe a property of a probability distribution or a set of observations/measurements made in an experiment which gives an indication of a likely outcome of an experiment.
The symbol \mu is usually used to represent the average of the parent distribution. Other notations are \bar{x} and \langle x \rangle.
Definition of the average
Here the above average of a parent distribution is defined as the limit of adding up an infinite number of observations x_i of an observable x and dividing by the number of observations:

\mu \equiv \lim_{N \rightarrow \infty} \frac{1}{N}\sum_{i=1}^{N} x_i

This definition uses the assumption that the results of an experiment asymptotically approach the average of the parent distribution as the number of observations grows.
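As a minimal sketch of this limiting behavior (assuming Python with numpy is available; the Gaussian parent distribution and the values \mu = 5, \sigma = 2 are arbitrary choices for illustration), the running mean of a growing sample approaches the parent mean:

```python
import numpy as np

# Hypothetical example: parent distribution chosen as a Gaussian
# with mean mu = 5.0 and width sigma = 2.0 (arbitrary values).
rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0

for N in (10, 100, 10_000, 1_000_000):
    x = rng.normal(mu, sigma, size=N)   # N observations x_i
    print(N, x.mean())                  # (1/N) * sum(x_i) -> mu as N grows
```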
Variance
The word "variance" is used to describe a property of a probability distribution or a set of observations/measurements made in an experiment which gives an indication how much an observation will deviate from and average value.
A deviation d_i of any measurement x_i from a parent distribution with a mean \mu can be defined as

d_i \equiv x_i - \mu

The deviations should average to ZERO for an infinite number of observations, by definition of the mean:

\lim_{N \rightarrow \infty} \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu) = \lim_{N \rightarrow \infty} \frac{1}{N}\sum_{i=1}^{N} x_i - \mu = 0
Definition of the average deviation
But the AVERAGE DEVIATION is given by the average of the magnitudes of the deviations:

- \bar{d} \equiv \lim_{N \rightarrow \infty} \frac{1}{N}\sum_{i=1}^{N} |x_i - \mu| = a measure of the dispersion of the expected observations about the mean
Taking the absolute value, though, is cumbersome when performing a statistical analysis, so one may express this dispersion in terms of the variance instead. A typical variable used to denote the variance is \sigma^2, and it is defined as

\sigma^2 \equiv \lim_{N \rightarrow \infty} \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2
Standard Deviation
The standard deviation is defined as the square root of the variance

- S.D. \equiv \sigma = \sqrt{\sigma^2}
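The following sketch (assuming Python with numpy; the parent Gaussian and its parameters are arbitrary choices) evaluates the average deviation, the variance, and the standard deviation of a large sample about a known parent mean:

```python
import numpy as np

# Hypothetical sketch: draw a large sample from a parent Gaussian with a
# known mean mu, then compute the dispersion measures about that mean.
rng = np.random.default_rng(1)
mu, sigma = 5.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)

avg_deviation = np.mean(np.abs(x - mu))   # (1/N) * sum |x_i - mu|
variance      = np.mean((x - mu) ** 2)    # (1/N) * sum (x_i - mu)^2
std_dev       = np.sqrt(variance)         # S.D. = sqrt(variance)

print(avg_deviation, variance, std_dev)   # std_dev should be close to sigma
```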
The mean should be thought of as a parameter which characterizes the observations we are making in an experiment. In general the mean specifies the probability distribution that is representative of the observable we are trying to measure through experimentation.
The variance characterizes the uncertainty associated with our experimental attempts to determine the "true" value. Although the mean and true value may not be equal, their difference should be less than the uncertainty given by the governing probability distribution.
Average for an unknown probability distribution (parent population)
If the "Parent Population" is not known, you are just given a list of numbers with no indication of the probability distribution that they were drawn from, then the average and variance may be calculate as shown below.
Arithmetic Mean and variance
If N observations (x_i) are made in an experiment, then the arithmetic mean of those observations is defined as

\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i
The "unbiased" variance of the above sample is defined as
- If you were told that the average is \mu, then you can calculate the "true" variance of the above sample as

\sigma^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2
Weighted Mean and variance
If each observable (x_i) is accompanied by an estimate of the uncertainty in that observable (\sigma_i), then the weighted mean is defined as

\bar{x} = \frac{\sum_{i=1}^{N} x_i/\sigma_i^2}{\sum_{i=1}^{N} 1/\sigma_i^2}

The variance of the distribution is defined as

\sigma_{\bar{x}}^2 = \left(\sum_{i=1}^{N} \frac{1}{\sigma_i^2}\right)^{-1}
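A minimal sketch of the weighted mean (assuming Python with numpy; the measurement values and uncertainties are made up for illustration):

```python
import numpy as np

# Hypothetical sketch: measurements x_i with individual uncertainties sigma_i.
x     = np.array([10.2,  9.8, 10.5, 10.1])
sigma = np.array([ 0.3,  0.2,  0.5,  0.4])

w = 1.0 / sigma**2                         # weights 1/sigma_i^2
xbar_weighted = np.sum(w * x) / np.sum(w)  # weighted mean
var_weighted  = 1.0 / np.sum(w)            # variance of the weighted mean

print(xbar_weighted, np.sqrt(var_weighted))
```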
Probability Distributions
Mean and variance
Mean of Discrete Probability Distribution
In the case that you know the probability distribution, you can calculate the mean and standard deviation directly from the distribution.

For a discrete probability distribution

\mu = \lim_{N \rightarrow \infty} \frac{1}{N}\sum_{j=1}^{N} x_j = \sum_{i=1}^{n} x_i P(x_i)

where

N = number of observations
n = number of different possible observable values
x_i = the i-th observable quantity
P(x_i) = the probability of observing x_i = the probability mass function
For a continuous probability distribution

\mu = \int_{-\infty}^{+\infty} x P(x)\, dx
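As a quick illustration of the discrete case (a sketch only; the fair six-sided die is an example chosen here, not taken from the text), the sum \sum x_i P(x_i) can be evaluated directly:

```python
# Minimal sketch: mean of a discrete distribution, mu = sum(x_i * P(x_i)).
# Example chosen here: the face value of a fair six-sided die.
faces = [1, 2, 3, 4, 5, 6]
P = {x: 1/6 for x in faces}          # probability mass function P(x_i)

mu = sum(x * P[x] for x in faces)    # mu = sum x_i P(x_i)
print(mu)                            # 3.5
```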
Expectation Value
The average of a sample drawn from any probability distribution is defined in terms of the expectation value E(x) such that

\mu \equiv E(x)

The expectation value for a discrete probability distribution is given by

E(x) = \sum_{i=1}^{n} x_i P(x_i)

The expectation value for a continuous probability distribution is calculated as

E(x) = \int_{-\infty}^{+\infty} x P(x)\, dx
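A rough numerical check of the continuous case (a sketch only; the density P(x) = e^{-x} for x \geq 0 is an arbitrary choice, and scipy is assumed to be available):

```python
import numpy as np
from scipy.integrate import quad

# Example density (arbitrary choice): P(x) = exp(-x) for x >= 0,
# whose expectation value is known to be 1.
P = lambda x: np.exp(-x)

E_x, _err = quad(lambda x: x * P(x), 0, np.inf)   # E(x) = integral of x P(x) dx
norm, _   = quad(P, 0, np.inf)                    # check the density integrates to 1

print(E_x, norm)   # approximately 1.0 and 1.0
```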
Uniform
Binomial Distribution
A binomial random variable describes experiments in which the outcome has only 2 possibilities. The two possible outcomes can be labeled as "success" or "failure". The probabilities may be defined as
- p = the probability of a success
- q = 1 - p = the probability of a failure
If we let x represent the number of successes after repeating the experiment n times (experiments with n = 1 are also known as Bernoulli trials), then x is the binomial random variable with parameters n and p. The number of ways in which the k successful outcomes can be arranged among the n repeated trials is

\frac{n!}{k!(n-k)!}

where the ! denotes a factorial such that n! = n(n-1)(n-2)\cdots 1.
The expression \frac{n!}{k!(n-k)!} is known as the binomial coefficient and is represented as

\binom{n}{k} = \frac{n!}{k!(n-k)!}
The probability of any one ordering of the successes and failures is given by

p^k q^{n-k}

This means the probability of getting exactly k successes after n trials is

P_B(k; n, p) = \binom{n}{k} p^k q^{n-k} = \frac{n!}{k!(n-k)!} p^k (1-p)^{n-k}
It can be shown that the expectation value of the distribution is

E(x) = np

and the variance is

\sigma^2 = np(1-p) = npq
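The following sketch (assuming Python; the parameter values n = 10 and p = 0.3 are arbitrary) evaluates the binomial probabilities with the formula above and checks that the mean and variance come out to np and npq:

```python
from math import comb

# Arbitrary example parameters for illustration.
n, p = 10, 0.3
q = 1 - p

# P_B(k; n, p) = C(n, k) * p^k * q^(n-k)
P = [comb(n, k) * p**k * q**(n - k) for k in range(n + 1)]

mean     = sum(k * P[k] for k in range(n + 1))               # should equal n*p
variance = sum((k - mean)**2 * P[k] for k in range(n + 1))   # should equal n*p*q

print(sum(P), mean, n * p, variance, n * p * q)
```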
Examples
The number of times a coin toss is heads.
The probability of a coin landing with the head of the coin facing up is p = 1/2.
Suppose you toss a coin 4 times. Here are the possible outcomes:
| Order Number | Trial 1 | Trial 2 | Trial 3 | Trial 4 | # of Heads |
|---|---|---|---|---|---|
| 1 | t | t | t | t | 0 |
| 2 | h | t | t | t | 1 |
| 3 | t | h | t | t | 1 |
| 4 | t | t | h | t | 1 |
| 5 | t | t | t | h | 1 |
| 6 | h | h | t | t | 2 |
| 7 | h | t | h | t | 2 |
| 8 | h | t | t | h | 2 |
| 9 | t | h | h | t | 2 |
| 10 | t | h | t | h | 2 |
| 11 | t | t | h | h | 2 |
| 12 | t | h | h | h | 3 |
| 13 | h | t | h | h | 3 |
| 14 | h | h | t | h | 3 |
| 15 | h | h | h | t | 3 |
| 16 | h | h | h | h | 4 |
The probability of order #1 happening is

P(order #1) = q^4 = \left(\frac{1}{2}\right)^4 = \frac{1}{16}

P(order #2) = p\, q^3 = \left(\frac{1}{2}\right)\left(\frac{1}{2}\right)^3 = \frac{1}{16}

The probability of observing the coin land on heads 3 times out of 4 trials is

P_B(3; 4, \tfrac{1}{2}) = \binom{4}{3}\left(\frac{1}{2}\right)^3\left(\frac{1}{2}\right)^{1} = 4 \times \frac{1}{16} = \frac{1}{4}
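A brute-force check of the table above (a sketch only, using Python's standard library) enumerates all 2^4 equally likely orderings and counts those with exactly 3 heads:

```python
from itertools import product

# Enumerate all 2^4 equally likely orderings of 4 coin tosses.
outcomes = list(product("ht", repeat=4))          # 16 orderings
three_heads = [o for o in outcomes if o.count("h") == 3]

print(len(outcomes))                                        # 16
print(len(three_heads), len(three_heads) / len(outcomes))   # 4 orderings -> 4/16 = 1/4
```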
Count the number of times a 6 is observed when rolling a die.

p = 1/6

Expectation value:
- The expected (average) number of sixes from a single roll of the die (n = 1) is E(x) = np = \frac{1}{6}.
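A quick simulation (sketch only, assuming Python with numpy; the number of rolls is an arbitrary choice) counting how often a 6 appears agrees with E(x) = np:

```python
import numpy as np

# Simulate n rolls of a fair die and count the sixes (sketch, arbitrary n).
rng = np.random.default_rng(2)
n = 60_000
rolls = rng.integers(1, 7, size=n)      # faces 1..6
n_sixes = np.count_nonzero(rolls == 6)

print(n_sixes, n * (1 / 6))             # observed count vs expected n*p = 10000
```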
Poisson Distribution
P(x; \lambda) = \frac{\lambda^x e^{-\lambda}}{x!}

where
- \lambda = probability for the occurrence of an event per unit interval
- Homework Problem (Bevington pg 38)
- Derive the Poisson distribution assuming a small sample size
1.) Assume that the average rate of an event is constant over a given time interval and that the events are randomly distributed over that time interval.
2.) The probability of NO events occurring over the time interval t is exponential such that

P(0; t) = e^{-t/\tau}

where \tau is a constant of proportionality associated with the mean time between events.
The change in this probability as a function of time is then given by

\frac{dP(0; t)}{dt} = -\frac{1}{\tau} e^{-t/\tau} = -\frac{P(0; t)}{\tau}
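As a sketch of how these assumptions fit together (assuming Python with numpy; the values of \tau, t, and the total simulated time are arbitrary), one can scatter events randomly in time at a constant average rate, count the events in intervals of length t, and check that the fraction of intervals with NO events matches e^{-t/\tau} while the counts follow the Poisson form above with \lambda = t/\tau:

```python
import numpy as np
from math import exp, factorial

# Sketch (arbitrary values): events randomly and uniformly distributed in time
# with a constant average rate of one event per tau time units.
rng = np.random.default_rng(3)
tau, t, total_time = 2.0, 1.0, 400_000.0

n_events = int(total_time / tau)                       # average rate 1/tau
times = rng.uniform(0.0, total_time, size=n_events)    # randomly distributed events

# Count events in consecutive intervals of length t.
counts = np.bincount((times / t).astype(int), minlength=int(total_time / t))

lam = t / tau   # mean number of events per interval
print(np.mean(counts == 0), exp(-t / tau))   # P(0; t) is approximately exp(-t/tau)

# Observed frequencies vs the Poisson form lam^x exp(-lam) / x!
for x in range(4):
    print(x, np.mean(counts == x), lam**x * exp(-lam) / factorial(x))
```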