Week 2: Random Variables and Central Limit Theorem
Part 1: Motivation - Populations, Samples and Estimates
Reference book: Data Analysis for the Life Sciences
Reference videos:
Mathematical Notation
- Summation
n <- 1000
x <- 1:n
S <- sum(x)
- Greek letters
Σ: the summation above is often written with Σ, the Greek capital letter for S (S is for sum).
μ: the unknown average we want to estimate is often denoted with μ, the Greek letter for m (m is for mean).
σ: the standard deviation is often denoted with σ, the Greek letter for s.
ε: measurement error or other unexplained random variability is typically denoted with ε, the Greek letter for e.
β: effect sizes, for example the effect of a diet on weight, are typically denoted with β.
- Infinity
One way to think about asymptotic results is as approximations that become better and better as some number increases; eventually we can pick a number so large that a computer can't tell the difference between the approximation and the real number.
onethird <- function(n) sum( 3/10^c(1:n))
1/3 - onethird(4)
# 3.333333e-05
1/3 - onethird(10)
# 3.333334e-11
1/3 - onethird(16)
# 0
In the example above, 16 is practically ∞.
- Integrals
f <- dnorm
x <- seq(-4,4,length=100)
plot(x, f(x), type="l")
x0 <- x[x>2]
y0 <- f(x0)
x0 <- c(min(x0),x0,max(x0))
y0 <- c(0,y0,0)
polygon(x0,y0,col="grey")
width <- 0.0001
x <- seq(2,4,width)
areaofbars <- f(x)*width
sum(areaofbars)
# [1] 0.02272117
The smaller we make width, the closer the sum gets to the integral, which is equal to the area.
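As a sanity check, the exact area under dnorm between 2 and 4 can be computed directly with pnorm, and it agrees with the Riemann sum above to several decimal places:

```r
# Exact area under the standard normal curve between 2 and 4,
# for comparison with sum(areaofbars) computed above
pnorm(4) - pnorm(2)
# [1] 0.02271846
```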
Random Variables
- sample
population <- read.csv("femaleControlsPopulation.csv")
##use unlist to turn it into a numeric vector
population <- unlist(population)
control <- sample(population,12)
mean(control)
# [1] 23.47667
control <- sample(population,12)
mean(control)
# [1] 25.12583
control <- sample(population,12)
mean(control)
# [1] 23.72333
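Each call to sample() gives a different random mean. replicate() makes this repetition explicit; the sketch below uses simulated weights as a stand-in (assumed mean and SD), in case the CSV is not available:

```r
# Hypothetical stand-in for the mouse weight population
set.seed(1)
population <- rnorm(225, mean = 23.9, sd = 3.4)
# Draw 5 samples of size 12 and record each sample mean
means <- replicate(5, mean(sample(population, 12)))
round(means, 2)  # five different values: the sample mean is a random variable
```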
Null Distributions
Null hypothesis
: the name “null” is used to remind us that we are acting as skeptics: we give credence to the possibility that there is no difference
Data from the original paper:
library(dplyr)
control <- filter(dat,Diet=="chow") %>%
select(Bodyweight) %>%
unlist
treatment <- filter(dat,Diet=="hf") %>%
select(Bodyweight) %>%
unlist
obsdiff <- mean(treatment) - mean(control)
print(obsdiff)
# [1] 3.020833
Building the null distribution via Monte Carlo simulation:
n <- 10000
null <- vector('numeric', n)
for (i in 1:n){
control <- sample(population,12)
treatment <- sample(population,12)
null[i] <- mean(treatment) - mean(control)
}
hist(null)
mean(null >= obsdiff)
# equivalent to
sum(null >= obsdiff) / n
# [1] 0.0123
mean(abs(null) >= obsdiff)
# [1] 0.0251
p-value
: the probability that an outcome from the null distribution is as big as or bigger than what we observed, assuming the null hypothesis is true.
Probability Distributions
Load the heights dataset from the UsingR package (here x is a vector of adult male heights).
- Cumulative Distribution Function
To define a distribution we compute, for all possible values of a, the proportion of numbers in our list that are below a: F(a) = Pr(x <= a).
This is called the cumulative distribution function (CDF).
When the CDF is derived from data, as opposed to theoretically, we also call it the empirical CDF (ECDF).
smallest <- floor(min(x))
largest <- ceiling(max(x))
values <- seq(smallest, largest,len=300)
heightecdf <- ecdf(x)
plot(values, heightecdf(values), type="l",
xlab="a (Height in inches)",ylab="Pr(x <= a)")
The ecdf function is a function that returns a function.
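A toy example (made-up numbers) showing that the object returned by ecdf can be called like any other function:

```r
# ecdf() returns a function; calling it at a gives the
# proportion of the data that is <= a
Fn <- ecdf(c(1, 2, 3, 4))
Fn(2)    # [1] 0.5
Fn(3.5)  # [1] 0.75
```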
- Histograms
smallest <- floor(min(x))
largest <- ceiling(max(x))
bins <- seq(smallest, largest)
hist(x,breaks=bins,xlab="Height (in inches)",main="Adult men heights")
- Probability Distribution
If we pick a random height from our list, then the probability of it falling between a and b is denoted with Pr(a <= x <= b).
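With a plain list of numbers this probability is just a proportion; a toy sketch with made-up heights:

```r
# Proportion of values falling in the interval (a, b]
vals <- c(60, 62, 65, 68, 70, 72, 75)  # made-up heights
a <- 63; b <- 71
mean(vals > a & vals <= b)  # [1] 0.4285714, i.e. 3 of 7 values in (63, 71]
```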
library(rafalib)
n <- 100
nullplot(-5,5,1,30, xlab="Observed differences (grams)", ylab="Frequency")
totals <- vector("numeric",11)
for (i in 1:n) {
control <- sample(population,12)
treatment <- sample(population,12)
nulldiff <- mean(treatment) - mean(control)
j <- pmax(pmin(round(nulldiff)+6,11),1)
totals[j] <- totals[j]+1
text(j-6,totals[j],pch=15,round(nulldiff,1))
Sys.sleep(1)
}
dir <- system.file(package="dagdata")
filename <- file.path(dir,"extdata/femaleMiceWeights.csv")
dat <- read.csv(filename)
control <- filter(dat,Diet=="chow") %>%
select(Bodyweight) %>%
unlist
treatment <- filter(dat,Diet=="hf") %>%
select(Bodyweight) %>%
unlist
obsdiff <- mean(treatment) - mean(control)
n <- 10000
null <- vector('numeric', n)
for (i in 1:n){
control <- sample(population,12)
treatment <- sample(population,12)
null[i] <- mean(treatment) - mean(control)
}
hist(null)
abline(v=obsdiff, col="red", lwd=2)
Normal Distribution
The probability distribution we see above approximates one that is very common in nature: the bell curve, also known as the normal distribution or Gaussian distribution.
When the histogram of a list of numbers approximates the normal distribution, we can use a convenient mathematical formula to approximate the proportion of values or outcomes in any given interval:
pnorm(obsdiff,mean(null),sd(null))
# [1] 0.9864804
1 - pnorm(obsdiff,mean(null),sd(null))
# [1] 0.01351959
# Under the normal distribution fitted to this null, the probability of exceeding obsdiff is about 1.4%
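The quality of this normal approximation can be checked on simulated data, where pnorm's answer is compared with the empirical proportion (a sketch with rnorm, not the mouse data):

```r
# Compare the normal CDF with the empirical proportion
# for simulated standard-normal data
set.seed(1)
z <- rnorm(10000)
mean(z <= 1)  # empirical proportion below 1
pnorm(1)      # [1] 0.8413447, the theoretical value
```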
- Standardized units
- Quantile-Quantile (QQ) Plot
The edX video mentions Q-Q plots; some related background follows.
- Exercises
# Download the data
library(downloader)
url <- "https://raw.githubusercontent.com/genomicsclass/dagdata/master/inst/extdata/femaleControlsPopulation.csv"
filename <- basename(url)
download(url, destfile=filename)
x <- unlist(read.csv(filename))
> 1
mean(x)
# [1] 23.89338
> 2
avg_total <- mean(x)
set.seed(1)
x_sample <- sample(x,5)
avg_sample <- mean(x_sample)
(avgdiff <- abs(avg_sample - avg_total))
# [1] 0.3293778
> 3
avg_total <- mean(x)
set.seed(5)
x_sample <- sample(x,5)
avg_sample <- mean(x_sample)
(avgdiff <- abs(avg_sample - avg_total))
# [1] 0.3813778
> 4
# C) Because the average of the samples is a random variable.
> 5
set.seed(1)
n <- 1000
null <- vector('numeric',n)
for (i in 1:n){
samp <- sample(x,5)
null[i] <- mean(samp)
}
1 - pnorm(mean(x)+1,mean(null),sd(null))
# [1] 0.2363077
hist(null)
abline(v=mean(x)+1, col="red", lwd=2)
> 6
set.seed(1)
n <- 10000
null <- vector('numeric',n)
for (i in 1:n){
samp <- sample(x,5)
null[i] <- mean(samp)
}
1 - pnorm(mean(x)+1,mean(null),sd(null))
# [1] 0.2530382
> 7
set.seed(1)
n <- 1000
null <- vector('numeric',n)
for (i in 1:n){
samp <- sample(x,50)
null[i] <- mean(samp)
}
1 - pnorm(mean(x)+1,mean(null),sd(null))
# [1] 0.01080894
> 8
set.seed(1)
n <- 1000
null <- vector('numeric',n)
for (i in 1:n){
samp <- sample(x,5)
null[i] <- mean(samp)
}
hist(null)
n <- 1000
null <- vector('numeric',n)
for (i in 1:n){
samp <- sample(x,50)
null[i] <- mean(samp)
}
hist(null)
# B) They both look roughly normal, but with a sample size of 50 the spread is smaller
> 9
set.seed(1)
n <- 1000
null <- vector('numeric',n)
for (i in 1:n){
samp <- sample(x,50)
null[i] <- mean(samp)
}
(results <- pnorm(25,mean(null),sd(null)) -
pnorm(23,mean(null),sd(null)))
# [1] 0.9766827
> 10
avg_q <- 23.9
sd_q <- 0.43
(results <- pnorm(25,avg_q,sd_q) -
pnorm(23,avg_q,sd_q))
# [1] 0.9765648
Populations, Samples and Estimates
- Population parameters
The mean: μ = (1/m) Σ xᵢ, for a population x₁, ..., xₘ of size m.
The variance: σ² = (1/m) Σ (xᵢ − μ)².
We refer to such quantities that can be obtained from the population as population parameters.
- Sample estimates
- Exercises
# Download the data
library(downloader)
url <- "https://raw.githubusercontent.com/genomicsclass/dagdata/master/inst/extdata/mice_pheno.csv"
filename <- basename(url)
download(url, destfile=filename)
dat <- read.csv(filename)
# Data preprocessing: check for and remove missing values
table(is.na(dat$Bodyweight))
# FALSE TRUE
# 841 5
dat <- na.omit(dat)
> 1
library(dplyr)
x <- dat %>%
filter((Sex == 'M') & (Diet == 'chow')) %>%
select(Bodyweight) %>%
unlist();x
mean(x)
# [1] 30.96381
> 2
library(rafalib)
?popsd
rafalib::popsd(x)
# [1] 4.420501
popsd
: population standard deviation
Returns the population standard deviation: it multiplies the result of var by (n-1)/n, with n the population size, and takes the square root. Note that sd instead returns the square root of the unbiased sample estimate of the population variance.
Summary:
popsd divides by n (population variance) before taking the square root, while sd divides by n-1 (unbiased sample variance).
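A small numeric check of the relationship, using a toy vector:

```r
# sd() divides by n-1; the population SD divides by n.
# popsd is equivalent to sqrt(var(v) * (n-1)/n).
v <- c(2, 4, 6, 8)  # toy vector
n <- length(v)
sd(v)                       # [1] 2.581989 (unbiased sample SD)
sqrt(var(v) * (n - 1) / n)  # [1] 2.236068 (what rafalib::popsd returns)
```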
> 3
set.seed(1)
X <- sample(x,25)
mean(X)
# [1] 30.5196
> 4
y <- dat %>%
filter((Sex == 'M') & (Diet == 'hf')) %>%
select(Bodyweight) %>%
unlist();y
mean(y)
# [1] 34.84793
> 5
rafalib::popsd(y)
# [1] 5.574609
> 6
set.seed(1)
Y <- sample(y,25)
mean(Y)
# [1] 35.8036
> 7
(popmdiff <- abs(mean(x) - mean(y)))
# [1] 3.884116
(sampmdiff <- abs(mean(X) - mean(Y)))
# [1] 5.284
> 8
x <- dat %>%
filter((Sex == 'F') & (Diet == 'chow')) %>%
select(Bodyweight) %>%
unlist();x
y <- dat %>%
filter((Sex == 'F') & (Diet == 'hf')) %>%
select(Bodyweight) %>%
unlist();y
set.seed(1)
X <- sample(x,25);X
set.seed(1)
Y <- sample(y,25);Y
(popmdiff <- abs(mean(x) - mean(y)))
# [1] 2.375517
(sampmdiff <- abs(mean(X) - mean(Y)))
# [1] 4.13
> 9
A) The population variance of the females is smaller than that of the males; thus, the sample averages have less variability.