data("PlantGrowth")
#?PlantGrowth
head(PlantGrowth)
weight group
1 4.17 ctrl
2 5.58 ctrl
3 5.18 ctrl
4 6.11 ctrl
5 4.50 ctrl
6 4.61 ctrl
ANOVA, Bayesian statistics, R programming, statistical modeling, Analysis of Variance
As an example of a one-way ANOVA, we'll look at the Plant Growth data in R.
data("PlantGrowth")
#?PlantGrowth
head(PlantGrowth)
weight group
1 4.17 ctrl
2 5.58 ctrl
3 5.18 ctrl
4 6.11 ctrl
5 4.50 ctrl
6 4.61 ctrl
We first load the dataset (Listing 43.1). Because the explanatory variable group is a factor rather than continuous, we visualize the data with box plots instead of scatter plots.
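A minimal call to produce the plot:
boxplot(weight ~ group, data=PlantGrowth)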
The box plots summarize the distribution of the data for each of the three groups. It appears that treatment 2 has the highest mean yield. It might be questionable whether each group has the same variance, but we’ll assume that is the case.
Again, we can start with the reference analysis (with a noninformative prior) with a linear model in R.
lmod = lm(weight ~ group, data=PlantGrowth)
summary(lmod)
Call:
lm(formula = weight ~ group, data = PlantGrowth)
Residuals:
Min 1Q Median 3Q Max
-1.0710 -0.4180 -0.0060 0.2627 1.3690
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.0320 0.1971 25.527 <2e-16 ***
grouptrt1 -0.3710 0.2788 -1.331 0.1944
grouptrt2 0.4940 0.2788 1.772 0.0877 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6234 on 27 degrees of freedom
Multiple R-squared: 0.2641, Adjusted R-squared: 0.2096
F-statistic: 4.846 on 2 and 27 DF, p-value: 0.01591
anova(lmod)
Analysis of Variance Table
Response: weight
Df Sum Sq Mean Sq F value Pr(>F)
group 2 3.7663 1.8832 4.8461 0.01591 *
Residuals 27 10.4921 0.3886
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(lmod) # for graphical residual analysis
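Note that plot(lmod) produces four diagnostic plots in sequence; to view them in a single frame, one option is:
par(mfrow=c(2,2))   # 2x2 grid for the four diagnostic plots
plot(lmod)
par(mfrow=c(1,1))   # reset the layout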
The default model structure in R is the linear model with dummy indicator variables. Hence, the "intercept" in this model is the mean yield for the control group, and the other two parameters are the estimated effects of treatments 1 and 2. To recover the mean yield in treatment group 1, you would add the intercept term and the treatment 1 effect. To see how R sets up the model, use model.matrix(lmod) to extract the X matrix.
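For example, a quick sketch of both operations:
coef(lmod)["(Intercept)"] + coef(lmod)["grouptrt1"]   # estimated mean yield for treatment 1
head(model.matrix(lmod))                              # first rows of the X matrix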
The anova() function in R compares variability of observations between the treatment groups to variability within the treatment groups to test whether all means are equal or whether at least one is different. The small p-value here suggests that the means are not all equal.
Let's fit the cell means model in JAGS.
library("rjags")
mod_string = " model {
    for (i in 1:length(y)) {
        y[i] ~ dnorm(mu[grp[i]], prec)
    }
    for (j in 1:3) {
        mu[j] ~ dnorm(0.0, 1.0/1.0e6)
    }
    prec ~ dgamma(5/2.0, 5*1.0/2.0)
    sig = sqrt(1.0 / prec)
} "
set.seed(82)
str(PlantGrowth)
'data.frame': 30 obs. of 2 variables:
$ weight: num 4.17 5.58 5.18 6.11 4.5 4.61 5.17 4.53 5.33 5.14 ...
$ group : Factor w/ 3 levels "ctrl","trt1",..: 1 1 1 1 1 1 1 1 1 1 ...
data_jags = list(y=PlantGrowth$weight,
                 grp=as.numeric(PlantGrowth$group))

params = c("mu", "sig")

inits = function() {
    inits = list("mu"=rnorm(3,0.0,100.0), "prec"=rgamma(1,1.0,1.0))
}

mod = jags.model(textConnection(mod_string), data=data_jags, inits=inits, n.chains=3)
Compiling model graph
Resolving undeclared variables
Allocating nodes
Graph information:
Observed stochastic nodes: 30
Unobserved stochastic nodes: 4
Total graph size: 74
Initializing model
update(mod, 1e3)
mod_sim = coda.samples(model=mod,
                       variable.names=params,
                       n.iter=5e3)
mod_csim = as.mcmc(do.call(rbind, mod_sim)) # combined chains
As usual, we check for convergence of our MCMC.
par(mar = c(2.5, 1, 2.5, 1))
plot(mod_sim)
gelman.diag(mod_sim)
Potential scale reduction factors:
Point est. Upper C.I.
mu[1] 1 1
mu[2] 1 1
mu[3] 1 1
sig 1 1
Multivariate psrf
1
autocorr.diag(mod_sim)
mu[1] mu[2] mu[3] sig
Lag 0 1.000000000 1.0000000000 1.000000000 1.000000000
Lag 1 0.013436346 0.0040470447 0.002497828 0.078567842
Lag 5 0.002980062 -0.0001053388 0.001477576 0.005634187
Lag 10 -0.006730387 -0.0060146888 0.007562672 -0.004662933
Lag 50 0.002539536 -0.0041910713 0.006462956 -0.008613151
effectiveSize(mod_sim)
mu[1] mu[2] mu[3] sig
15000.00 14961.66 15044.51 12867.41
We can also look at the residuals to see if there are any obvious problems with our model choice.
(pm_params = colMeans(mod_csim))
mu[1] mu[2] mu[3] sig
5.0311526 4.6648831 5.5239195 0.7137073
yhat = pm_params[1:3][data_jags$grp]
resid = data_jags$y - yhat
plot(resid)
Again, it might be appropriate to have a separate variance for each group. We will have you do that as an exercise.
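As a hint for that exercise, here is a minimal sketch of how the model string might change to give each group its own variance (mod_string2 is an illustrative name; the priors mirror the common-variance model above):
mod_string2 = " model {
    for (i in 1:length(y)) {
        y[i] ~ dnorm(mu[grp[i]], prec[grp[i]])   # group-specific precision
    }
    for (j in 1:3) {
        mu[j] ~ dnorm(0.0, 1.0/1.0e6)
        prec[j] ~ dgamma(5/2.0, 5*1.0/2.0)       # one variance parameter per group
        sig[j] = sqrt(1.0 / prec[j])
    }
} "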
Let’s look at the posterior summary of the parameters.
summary(mod_sim)
Iterations = 1001:6000
Thinning interval = 1
Number of chains = 3
Sample size per chain = 5000
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
Mean SD Naive SE Time-series SE
mu[1] 5.0312 0.22850 0.0018657 0.0018658
mu[2] 4.6649 0.22847 0.0018655 0.0018700
mu[3] 5.5239 0.22569 0.0018428 0.0018402
sig 0.7137 0.09279 0.0007576 0.0008181
2. Quantiles for each variable:
2.5% 25% 50% 75% 97.5%
mu[1] 4.5802 4.8793 5.032 5.1817 5.4794
mu[2] 4.2136 4.5132 4.665 4.8184 5.1100
mu[3] 5.0708 5.3783 5.526 5.6739 5.9631
sig 0.5594 0.6476 0.704 0.7694 0.9211
HPDinterval(mod_csim)
lower upper
mu[1] 4.5826477 5.4802294
mu[2] 4.2119728 5.1079891
mu[3] 5.0720532 5.9640235
sig 0.5458935 0.8958923
attr(,"Probability")
[1] 0.95
The HPDinterval() function in the coda package calculates intervals of highest posterior density for each parameter.
We are interested in whether one of the treatments increases mean yield. It is clear that treatment 1 does not. What about treatment 2?
mean(mod_csim[,3] > mod_csim[,1])
[1] 0.9392
There is a high posterior probability that the mean yield for treatment 2 is greater than the mean yield for the control group.
It may be the case that treatment 2 would be costly to put into production. Suppose that to be worthwhile, this treatment must increase mean yield by 10%. What is the posterior probability that the increase is at least that?
mean(mod_csim[,3] > 1.1*mod_csim[,1])
[1] 0.4872667
We have about 50/50 odds that adopting treatment 2 would increase mean yield by at least 10%.
Let's explore an example with two factors. We'll use the warpbreaks data set in R. Check the documentation for a description of the data by typing ?warpbreaks.
data("warpbreaks")
#?warpbreaks
head(warpbreaks)
breaks wool tension
1 26 A L
2 30 A L
3 54 A L
4 25 A L
5 70 A L
6 52 A L
table(warpbreaks$wool, warpbreaks$tension)
L M H
A 9 9 9
B 9 9 9
Again, we visualize the data with box plots.
boxplot(log(breaks) ~ wool + tension, data=warpbreaks)
The different groups have more similar variance if we use the logarithm of breaks. From this visualization, it looks like both factors may play a role in the number of breaks. It appears that there is a general decrease in breaks as we move from low to medium to high tension. Let’s start with a one-way model using tension only.
mod1_string = " model {
    for (i in 1:length(y)) {
        y[i] ~ dnorm(mu[tensGrp[i]], prec)
    }
    for (j in 1:3) {
        mu[j] ~ dnorm(0.0, 1.0/1.0e6)
    }
    prec ~ dgamma(5/2.0, 5*2.0/2.0)
    sig = sqrt(1.0 / prec)
} "
set.seed(83)
str(warpbreaks)
'data.frame': 54 obs. of 3 variables:
$ breaks : num 26 30 54 25 70 52 51 26 67 18 ...
$ wool : Factor w/ 2 levels "A","B": 1 1 1 1 1 1 1 1 1 1 ...
$ tension: Factor w/ 3 levels "L","M","H": 1 1 1 1 1 1 1 1 1 2 ...
data1_jags = list(y=log(warpbreaks$breaks), tensGrp=as.numeric(warpbreaks$tension))

params1 = c("mu", "sig")

mod1 = jags.model(textConnection(mod1_string), data=data1_jags, n.chains=3)
Compiling model graph
Resolving undeclared variables
Allocating nodes
Graph information:
Observed stochastic nodes: 54
Unobserved stochastic nodes: 4
Total graph size: 123
Initializing model
update(mod1, 1e3)
mod1_sim = coda.samples(model=mod1,
                        variable.names=params1,
                        n.iter=5e3)
## convergence diagnostics
plot(mod1_sim)
gelman.diag(mod1_sim)
Potential scale reduction factors:
Point est. Upper C.I.
mu[1] 1 1
mu[2] 1 1
mu[3] 1 1
sig 1 1
Multivariate psrf
1
autocorr.diag(mod1_sim)
mu[1] mu[2] mu[3] sig
Lag 0 1.000000000 1.000000000 1.000000000 1.000000000
Lag 1 0.005742044 -0.002630648 -0.006219235 0.040790637
Lag 5 0.006183362 -0.021405393 -0.007143333 0.005450553
Lag 10 -0.004945709 0.004002700 -0.017905549 -0.001976899
Lag 50 -0.001788643 -0.011171937 0.002705026 0.001421458
effectiveSize(mod1_sim)
mu[1] mu[2] mu[3] sig
15000.00 14662.15 15000.00 13809.47
The 95% posterior interval for the mean of group 2 (medium tension) overlaps with the intervals for both the low and high groups, but the intervals for the low and high groups barely overlap. That is fairly strong evidence that the means for low and high tension are different.
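To inspect these intervals, one option (assuming the mod1_sim object created above) is the usual posterior summary:
summary(mod1_sim)
Let's collect the DIC for this model and move on to the two-way model.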
dic1 = dic.samples(mod1, n.iter=1e3)
With two factors, one with two levels and the other with three, we have six treatment groups, which is the same situation we discussed when introducing multiple-factor ANOVA. We will first fit the additive model, which treats the two factors separately with no interaction. To get the X matrix (or design matrix) for this model, we can create it in R.
X = model.matrix( ~ wool + tension, data=warpbreaks)
head(X)
(Intercept) woolB tensionM tensionH
1 1 0 0 0
2 1 0 0 0
3 1 0 0 0
4 1 0 0 0
5 1 0 0 0
6 1 0 0 0
tail(X)
(Intercept) woolB tensionM tensionH
49 1 1 0 1
50 1 1 0 1
51 1 1 0 1
52 1 1 0 1
53 1 1 0 1
54 1 1 0 1
By default, R has chosen the mean for wool A at low tension to be the intercept. Then there is an effect for wool B, and effects for medium and high tension, each associated with a dummy indicator variable.
mod2_string = " model {
    for (i in 1:length(y)) {
        y[i] ~ dnorm(mu[i], prec)
        mu[i] = int + alpha*isWoolB[i] + beta[1]*isTensionM[i] + beta[2]*isTensionH[i]
    }
    int ~ dnorm(0.0, 1.0/1.0e6)
    alpha ~ dnorm(0.0, 1.0/1.0e6)
    for (j in 1:2) {
        beta[j] ~ dnorm(0.0, 1.0/1.0e6)
    }
    prec ~ dgamma(3/2.0, 3*1.0/2.0)
    sig = sqrt(1.0 / prec)
} "
data2_jags = list(y=log(warpbreaks$breaks), isWoolB=X[,"woolB"], isTensionM=X[,"tensionM"], isTensionH=X[,"tensionH"])

params2 = c("int", "alpha", "beta", "sig")

mod2 = jags.model(textConnection(mod2_string), data=data2_jags, n.chains=3)
Compiling model graph
Resolving undeclared variables
Allocating nodes
Graph information:
Observed stochastic nodes: 54
Unobserved stochastic nodes: 5
Total graph size: 243
Initializing model
update(mod2, 1e3)
mod2_sim = coda.samples(model=mod2,
                        variable.names=params2,
                        n.iter=5e3)
## convergence diagnostics
plot(mod2_sim)
gelman.diag(mod2_sim)
Potential scale reduction factors:
Point est. Upper C.I.
alpha 1 1.00
beta[1] 1 1.01
beta[2] 1 1.01
int 1 1.01
sig 1 1.00
Multivariate psrf
1
autocorr.diag(mod2_sim)
alpha beta[1] beta[2] int sig
Lag 0 1.000000000 1.00000000 1.00000000 1.000000000 1.000000000
Lag 1 0.495044676 0.49944082 0.51247433 0.748010057 0.090771889
Lag 5 0.027011090 0.11017637 0.10296294 0.191450381 0.015087829
Lag 10 0.003729318 0.03354112 0.03263936 0.046551367 -0.013536313
Lag 50 -0.009067024 0.01151136 0.01526284 -0.001878944 0.007605379
effectiveSize(mod2_sim)
alpha beta[1] beta[2] int sig
5066.383 3838.164 3738.536 2416.258 11609.870
Let’s summarize the results, collect the DIC for this model, and compare it to the first one-way model.
summary(mod2_sim)
Iterations = 1001:6000
Thinning interval = 1
Number of chains = 3
Sample size per chain = 5000
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
Mean SD Naive SE Time-series SE
alpha -0.1521 0.12335 0.0010071 0.001734
beta[1] -0.2887 0.15147 0.0012368 0.002450
beta[2] -0.4899 0.15321 0.0012510 0.002506
int 3.5771 0.12318 0.0010058 0.002506
sig 0.4545 0.04506 0.0003679 0.000419
2. Quantiles for each variable:
2.5% 25% 50% 75% 97.5%
alpha -0.3957 -0.2334 -0.1521 -0.06865 0.08797
beta[1] -0.5814 -0.3895 -0.2888 -0.18691 0.01207
beta[2] -0.7942 -0.5913 -0.4888 -0.38845 -0.18664
int 3.3360 3.4948 3.5781 3.65807 3.81944
sig 0.3771 0.4226 0.4510 0.48197 0.55261
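Under this additive model, each group mean on the log scale is a sum of effects. As a minimal sketch (the name pm2 is introduced here for illustration), the posterior mean of the cell for wool B at medium tension can be recovered as:
pm2 = colMeans(as.matrix(mod2_sim))          # posterior means of alpha, beta, int, sig
pm2["int"] + pm2["alpha"] + pm2["beta[1]"]   # log-scale mean for wool B, medium tension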
(dic2 = dic.samples(mod2, n.iter=1e3))
Mean deviance: 55.67
penalty 5.378
Penalized deviance: 61.05
dic1
Mean deviance: 66.33
penalty 3.956
Penalized deviance: 70.28
This suggests there is much to be gained by adding the wool factor to the model. Before we settle on this model, however, we should consider whether there is an interaction. Let's look again at the box plot with all six treatment groups.
boxplot(log(breaks) ~ wool + tension, data=warpbreaks)
Our two-way model has a single effect for wool B, and the estimate is negative. If this is accurate, we would expect wool B to be associated with fewer breaks than its wool A counterpart on average. This holds for low and high tension, but breaks appear to be higher for wool B under medium tension. That is, the effect of wool B is not consistent across tension levels, so it may be appropriate to add an interaction term. In R, this would look like:
lmod2 = lm(log(breaks) ~ .^2, data=warpbreaks)
summary(lmod2)
Call:
lm(formula = log(breaks) ~ .^2, data = warpbreaks)
Residuals:
Min 1Q Median 3Q Max
-0.81504 -0.27885 0.04042 0.27319 0.64358
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.7179 0.1247 29.824 < 2e-16 ***
woolB -0.4356 0.1763 -2.471 0.01709 *
tensionM -0.6012 0.1763 -3.410 0.00133 **
tensionH -0.6003 0.1763 -3.405 0.00134 **
woolB:tensionM 0.6281 0.2493 2.519 0.01514 *
woolB:tensionH 0.2221 0.2493 0.891 0.37749
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.374 on 48 degrees of freedom
Multiple R-squared: 0.3363, Adjusted R-squared: 0.2672
F-statistic: 4.864 on 5 and 48 DF, p-value: 0.001116
Adding the interaction, we get an effect for being in wool B and medium tension, as well as for being in wool B and high tension. There are now six parameters for the mean, one for each treatment group, so this model is equivalent to the full cell means model. Let’s use that.
In this new model, mu will be a 2-by-3 matrix with six entries, each corresponding to a treatment group.
mod3_string = " model {
    for (i in 1:length(y)) {
        y[i] ~ dnorm(mu[woolGrp[i], tensGrp[i]], prec)
    }
    for (j in 1:max(woolGrp)) {
        for (k in 1:max(tensGrp)) {
            mu[j,k] ~ dnorm(0.0, 1.0/1.0e6)
        }
    }
    prec ~ dgamma(3/2.0, 3*1.0/2.0)
    sig = sqrt(1.0 / prec)
} "
str(warpbreaks)
'data.frame': 54 obs. of 3 variables:
$ breaks : num 26 30 54 25 70 52 51 26 67 18 ...
$ wool : Factor w/ 2 levels "A","B": 1 1 1 1 1 1 1 1 1 1 ...
$ tension: Factor w/ 3 levels "L","M","H": 1 1 1 1 1 1 1 1 1 2 ...
data3_jags = list(y=log(warpbreaks$breaks), woolGrp=as.numeric(warpbreaks$wool), tensGrp=as.numeric(warpbreaks$tension))

params3 = c("mu", "sig")

mod3 = jags.model(textConnection(mod3_string), data=data3_jags, n.chains=3)
Compiling model graph
Resolving undeclared variables
Allocating nodes
Graph information:
Observed stochastic nodes: 54
Unobserved stochastic nodes: 7
Total graph size: 179
Initializing model
update(mod3, 1e3)
mod3_sim = coda.samples(model=mod3,
                        variable.names=params3,
                        n.iter=5e3)
mod3_csim = as.mcmc(do.call(rbind, mod3_sim))
plot(mod3_sim)
## convergence diagnostics
gelman.diag(mod3_sim)
Potential scale reduction factors:
Point est. Upper C.I.
mu[1,1] 1 1
mu[2,1] 1 1
mu[1,2] 1 1
mu[2,2] 1 1
mu[1,3] 1 1
mu[2,3] 1 1
sig 1 1
Multivariate psrf
1
autocorr.diag(mod3_sim)
mu[1,1] mu[2,1] mu[1,2] mu[2,2] mu[1,3]
Lag 0 1.000000000 1.000000000 1.0000000000 1.000000000 1.000000000
Lag 1 -0.001334991 0.004444903 0.0015227061 0.010184464 -0.001244189
Lag 5 -0.003555747 0.001778168 0.0023689915 0.004411321 0.005285573
Lag 10 0.003529375 0.004347127 -0.0004511219 -0.001941470 -0.013546302
Lag 50 0.018361471 -0.004609204 0.0061336536 -0.006919621 -0.018148061
mu[2,3] sig
Lag 0 1.000000000 1.000000000
Lag 1 0.011081118 0.112549919
Lag 5 0.001132429 -0.014073448
Lag 10 -0.008136803 -0.002108669
Lag 50 -0.006141981 -0.012100209
effectiveSize(mod3_sim)
mu[1,1] mu[2,1] mu[1,2] mu[2,2] mu[1,3] mu[2,3] sig
15000.00 15000.00 15000.00 15148.71 15000.00 15011.69 12293.72
raftery.diag(mod3_sim)
[[1]]
Quantile (q) = 0.025
Accuracy (r) = +/- 0.005
Probability (s) = 0.95
Burn-in Total Lower bound Dependence
(M) (N) (Nmin) factor (I)
mu[1,1] 2 3866 3746 1.030
mu[2,1] 2 3930 3746 1.050
mu[1,2] 3 4062 3746 1.080
mu[2,2] 3 4062 3746 1.080
mu[1,3] 2 3803 3746 1.020
mu[2,3] 2 3741 3746 0.999
sig 2 3866 3746 1.030
[[2]]
Quantile (q) = 0.025
Accuracy (r) = +/- 0.005
Probability (s) = 0.95
Burn-in Total Lower bound Dependence
(M) (N) (Nmin) factor (I)
mu[1,1] 2 3680 3746 0.982
mu[2,1] 2 3995 3746 1.070
mu[1,2] 2 3866 3746 1.030
mu[2,2] 2 3741 3746 0.999
mu[1,3] 2 3620 3746 0.966
mu[2,3] 2 3866 3746 1.030
sig 2 3620 3746 0.966
[[3]]
Quantile (q) = 0.025
Accuracy (r) = +/- 0.005
Probability (s) = 0.95
Burn-in Total Lower bound Dependence
(M) (N) (Nmin) factor (I)
mu[1,1] 2 3741 3746 0.999
mu[2,1] 2 3995 3746 1.070
mu[1,2] 2 3741 3746 0.999
mu[2,2] 2 3930 3746 1.050
mu[1,3] 2 3995 3746 1.070
mu[2,3] 2 3866 3746 1.030
sig 2 3741 3746 0.999
Let’s compute the DIC and compare with our previous models.
(dic3 = dic.samples(mod3, n.iter=1e3))
Mean deviance: 52.08
penalty 7.298
Penalized deviance: 59.38
dic2
Mean deviance: 55.67
penalty 5.378
Penalized deviance: 61.05
dic1
Mean deviance: 66.33
penalty 3.956
Penalized deviance: 70.28
This suggests that the full model with interaction between wool and tension (which is equivalent to the cell means model) is the best for explaining/predicting warp breaks.
summary(mod3_sim)
Iterations = 1001:6000
Thinning interval = 1
Number of chains = 3
Sample size per chain = 5000
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
Mean SD Naive SE Time-series SE
mu[1,1] 3.7169 0.14976 0.0012228 0.0012228
mu[2,1] 3.2817 0.14970 0.0012223 0.0012223
mu[1,2] 3.1166 0.14799 0.0012083 0.0012083
mu[2,2] 3.3102 0.14819 0.0012100 0.0012050
mu[1,3] 3.1174 0.14847 0.0012123 0.0012122
mu[2,3] 2.9048 0.14881 0.0012151 0.0012146
sig 0.4433 0.04461 0.0003642 0.0004038
2. Quantiles for each variable:
2.5% 25% 50% 75% 97.5%
mu[1,1] 3.4217 3.618 3.7172 3.8164 4.0116
mu[2,1] 2.9868 3.181 3.2833 3.3820 3.5764
mu[1,2] 2.8267 3.017 3.1166 3.2149 3.4079
mu[2,2] 3.0212 3.212 3.3093 3.4103 3.6025
mu[1,3] 2.8294 3.017 3.1178 3.2160 3.4087
mu[2,3] 2.6089 2.805 2.9043 3.0030 3.1978
sig 0.3656 0.412 0.4399 0.4707 0.5417
HPDinterval(mod3_csim)
lower upper
mu[1,1] 3.4239232 4.0130138
mu[2,1] 2.9885124 3.5768401
mu[1,2] 2.8225221 3.4006026
mu[2,2] 3.0247120 3.6054108
mu[1,3] 2.8350527 3.4121496
mu[2,3] 2.6040120 3.1916467
sig 0.3605344 0.5337449
attr(,"Probability")
[1] 0.95
par(mfrow=c(3,2)) # arrange frame for plots
densplot(mod3_csim[,1:6], xlim=c(2.0, 4.5))
It might be tempting to look at comparisons between each combination of treatments, but we warn that this could yield spurious results. When we discussed the statistical modeling cycle, we said it is best not to search your results for interesting hypotheses, because if there are many hypotheses, some will appear to show “effects” or “associations” simply due to chance. Results are most reliable when we determine a relatively small number of hypotheses we are interested in beforehand, collect the data, and statistically evaluate the evidence for them.
One question we might be interested in with these data is finding the treatment combination that produces the fewest breaks. To calculate this, we can go through our posterior samples and for each sample, find out which group has the smallest mean. These counts help us determine the posterior probability that each of the treatment groups has the smallest mean.
prop.table( table( apply(mod3_csim[,1:6], 1, which.min) ) )
2 3 4 5 6
0.01626667 0.11626667 0.00920000 0.12046667 0.73780000
The evidence supports wool B with high tension as the treatment that produces the fewest breaks.
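Because the model was fit on the log scale, exp(mu) is the posterior median number of breaks on the original scale. A small follow-up sketch, using the combined samples:
# approximate posterior interval for median breaks under wool B, high tension
quantile(exp(mod3_csim[,"mu[2,3]"]), c(0.025, 0.5, 0.975))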