Power Analysis in R
Introduction
This tutorial introduces power analysis using R. Power analysis is a method primarily used to determine the appropriate sample size for empirical studies.
This tutorial is aimed at intermediate and advanced users of R and showcases how to perform power analyses for basic inferential tests using the pwr package (Champely 2020) and for mixed-effects models (generated with the lme4 package) using the simr package (Green and MacLeod 2016). The aim is not to provide a fully-fledged analysis but rather to show and exemplify a handy method for estimating the power of experimental and observational designs and how to implement it in R.
A list of highly recommendable papers discussing research on effect sizes and power analyses for linguistic data can be found here. Some of the main findings from these papers, as stated in Brysbaert and Stevens (2018) and on the http://crr.ugent.be website, are:
In experimental psychology we can do replicable research with 20 participants or less if we have multiple observations per participant per condition, because we can turn rather small differences between conditions into effect sizes of d > .8 by averaging across observations.
The more sobering finding is that the required number of observations is higher than the numbers currently used (which is why we run underpowered studies). The ballpark figure we propose for RT experiments with repeated measures is 1600 observations per condition (e.g., 40 participants and 40 stimuli per condition).
The 1600 observations we propose is when you start a new line of research and don’t know what to expect.
Standardized effect sizes in analyses over participants (e.g., Cohen’s d) depend on the number of stimuli that were presented. Hence, you must include the same number of observations per condition if you want to replicate the results. The fact that the effect size depends on the number of stimuli also has implications for meta-analyses.
The entire R Notebook for the tutorial can be downloaded here. If you want to render the R Notebook on your machine, i.e. knitting the document to html or a pdf, you need to make sure that you have R and RStudio installed and you also need to download the bibliography file and store it in the same folder where you store the Rmd file.
The basis for the present tutorial is Green and MacLeod (which you can find here), a highly recommendable and thorough tutorial on performing power analysis in R. Other recommendable literature on this topic includes, e.g., Arnold et al., Johnson et al., and this tutorial.
What determines if you find an effect?
To explore this issue, let us have a look at the distributions of samples with varying features that are drawn from either one population or two different populations.
Let us start with the distribution of two samples (N = 30) sampled from the same population.
The means of the samples are very similar and a t-test confirms that we can assume that the samples are drawn from the same population (as the p-value is greater than .05).
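The plots themselves are generated behind the scenes; as a minimal sketch, two such samples could be drawn and compared as follows (the means, standard deviation, and seed are illustrative assumptions):
# draw two samples of 30 from the same population (illustrative values)
set.seed(42)
s1 <- rnorm(30, mean = 100, sd = 10)
s2 <- rnorm(30, mean = 100, sd = 10)
# a p-value greater than .05 provides no evidence that the samples differ
t.test(s1, s2)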
Let us now draw another two samples (N = 30) but from different populations where the effect of group is weak (the population difference is small).
Let us briefly check if the effect size, Cohen’s d of 0.2, is correct (for this, we increase the sample size dramatically to get very accurate estimates). If the above effect size is correct (Cohen’s d = 0.2), then the reported effect size should be 0.2 (or -0.2). This is so because Cohen’s d represents distances between group means measured in standard deviations (in our example, the standard deviation is 10 and the difference between the means is 2, i.e., 20 percent of a standard deviation; which is equal to a Cohen’s d value of 0.2). Let’s check the effect size now (we will only do this for this distribution but you can easily check for yourself that the effect sizes provided in the plot headers below are correct by adapting the code chunk below and using the numbers provided in the plot headers).
# generate two vectors with numbers using the means and sd from above
# means = 101, 99, sd = 10
num1 <- rnorm(1000000, mean = 101, sd = 10)
num2 <- rnorm(1000000, mean = 99, sd = 10)
# perform t-test
tt <- t.test(num1, num2, alternative = "two.sided")
# extract effect size
effectsize::effectsize(tt)
Cohen's d | 95% CI
------------------
0.20 |
- Estimated using un-pooled SD.
Based on the t-test results in the distribution plot, we would assume that the data represents two samples from the same population (which is false) because the p-value is higher than .05. This means that the sample size is insufficient, i.e., it does not have enough power to show that the samples are actually drawn from two different populations, given the variability in the data and the size of the effect (which is weak).
What happens if we increase the effect size to medium? This means that we again draw two samples from two different populations, but the difference between the populations is a bit larger (group has a medium effect, Cohen's d = .5).
Now, let's have a look at the distribution of two different groups where group has a strong effect (Cohen's d = .8).
And, finally, let's have a look at the distribution of two different groups where group has a very strong effect (Cohen's d = 1.2).
If variability and sample size remain constant, larger effects are easier to detect than smaller effects!
Sample size
And let’s now look at sample size.
Let us now increase the sample size to N = 50.
If variability and effect size remain constant, effects are easier to detect with increasing sample size!
Variability
And let’s now look at variability.
Let’s decrease the variability to sd = 5.
If the sample and effect size remain constant, effects are easier to detect with decreasing variability!
In summary, three main factors determine whether a model detects an effect, i.e., its power (the probability of finding an effect):
- the size of the effect (bigger effects are easier to detect),
- the variability of the effect (less variability makes it easier to detect an effect), and
- the sample size (the bigger the sample size, the easier it is to detect an effect); the sample size itself comprises
  - the number of subjects/participants,
  - the number of items/questions, and
  - the number of observations per item within subjects/participants.
Now, if (a) we are dealing with a very big effect, then we need only few participants and few items to accurately find this effect. Or, (b) if we are dealing with an effect that has low variability (it is observable for all subjects with the same strength), then we also need only few participants and few items to accurately find this effect.
Before we conduct a study, we should figure out, what sample we need to detect a small/medium effect with medium variability so that our model is sufficient to detect this kind of effect. In order to do this, we would generate a data set that mirrors the kind of data that we expect to get (with the properties that we expect to get). We can then fit a model to this data and check if a model would be able to detect the expected effect. However, because a single model does not tell us that much (it could simply be luck that it happened to find the effect), we run many different models on variations of the data and see how many of them find the effect. As a general rule of thumb, we want a data set that allows a model to find a medium sized effect with at least an accuracy of 80 percent (Field et al.).
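To make the idea of simulation-based power estimation concrete, here is a minimal sketch for a simple two-group comparison (the effect size, standard deviation, group size, and number of simulations are all illustrative assumptions):
# simulate 1000 studies with a medium effect (difference of 5 with sd = 10, i.e., d = 0.5)
# and 30 observations per group, then count how often a t-test detects the effect
set.seed(123)
pvals <- replicate(1000, {
  g1 <- rnorm(30, mean = 100, sd = 10)
  g2 <- rnorm(30, mean = 105, sd = 10)
  t.test(g1, g2)$p.value
})
# proportion of significant results = estimated power
mean(pvals < .05)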
In the following, we will go through how to determine what sample size we need for an example analysis.
Preparation and session set up
This tutorial is based on R. If you have not installed R or are new to it, you will find an introduction to R and more information on how to use it here. For this tutorial, we need to install certain packages into the R library on your computer so that the scripts shown below are executed without errors. Before turning to the code below, please install the packages by running the code below this paragraph. If you have already installed the packages mentioned below, then you can skip ahead and ignore this section. To install the necessary packages, simply run the following code - it may take some time (between 1 and 5 minutes), so you do not need to worry if installation takes a while.
# install libraries
install.packages(c("tidyverse", "pwr", "lme4", "sjPlot", "simr", "effectsize"))
install.packages(c("DT", "knitr", "flextable"))
install.packages("DescTools")
install.packages("pacman")
# install klippy for copy-to-clipboard button in code chunks
install.packages("remotes")
remotes::install_github("rlesur/klippy")
Now that we have installed the packages, we can activate them as shown below.
# set options
options(stringsAsFactors = F) # no automatic conversion of factors
options("scipen" = 100, "digits" = 4) # suppress math annotation
options(max.print = 1000) # print max 1000 results
# load packages
library(tidyverse)
library(pwr)
library(lme4)
library(sjPlot)
library(simr)
library(effectsize)
library(DT)
library(knitr)
library(flextable)
pacman::p_load_gh("trinker/entity")
# activate klippy for copy-to-clipboard button
klippy::klippy()
Once you have installed R and RStudio and initiated the session by executing the code shown above, you are good to go.
Basic Power Analysis
Let's start with a simple power analysis to see how power analyses work for basic statistical tests such as t-tests, \(\chi^2\)-tests, or linear regression.
The pwr package (Champely 2020) implements power analysis as outlined by Cohen and allows us to perform power analyses for the following tests (selection):
- balanced one-way ANOVA (pwr.anova.test)
- chi-square test (pwr.chisq.test)
- correlation (pwr.r.test)
- general linear model (pwr.f2.test)
- paired (one and two sample) t-test (pwr.t.test)
- two-sample t-test with unequal N (pwr.t2n.test)
For each of these functions, you enter three of the four parameters (effect size, sample size, significance level, power) and the fourth is calculated.
So how does this work? Have a look at the code chunk below.
One-way ANOVA
Let's check how to calculate the necessary sample size for each group for a one-way ANOVA that compares 5 groups (k) and that has a power of 0.80 (80 percent), when the effect size is moderate (f = 0.25) and the significance level is 0.05 (5 percent).
# load package
library(pwr)
# calculate minimal sample size
pwr.anova.test(
k = 5, # 5 groups are compared
f = .25, # moderate effect size
sig.level = .05, # alpha/sig. level = .05
  power = .8 # confint./power = .8
)
Balanced one-way analysis of variance power calculation
k = 5
n = 39.15
f = 0.25
sig.level = 0.05
power = 0.8
NOTE: n is number in each group
In this case, the minimum number of participants in each group would be 40.
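As a side note, the result of pwr.anova.test is a list-like object, so the required group size can be extracted and rounded up programmatically - a small sketch, with pwranova as a hypothetical object name:
# store the result and round the required group size up
pwranova <- pwr.anova.test(k = 5, f = .25, sig.level = .05, power = .8)
ceiling(pwranova$n)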
Let’s check how we could calculate the power if we had already collected data (with 30 participants in each group) and we want to report the power of our analysis (and let us assume that the effect size was medium).
# calculate minimal sample size
pwr.anova.test(
k = 5, # 5 groups are compared
f = .25, # moderate effect size
sig.level = .05, # alpha/sig. level = .05
  n = 30 # n of participants
)
Balanced one-way analysis of variance power calculation
k = 5
n = 30
f = 0.25
sig.level = 0.05
power = 0.6676
NOTE: n is number in each group
In this case, our analysis would only have a power of 66.8 percent. This means that we would only detect a medium-sized effect in 66.8 percent of cases (which is considered insufficient).
Power Analysis for GLMs
When determining the power of a generalized linear model, we need to provide
- the degrees of freedom for the numerator (u),
- the degrees of freedom for the denominator (v),
- the effect size (the estimate of the intercept and the slope/estimates of the predictors), and
- the level of significance (i.e., the type I error probability).
The values of the parameters in the example below are adapted from the fixed-effects regression example that was used to analyze different teaching styles (see here).
The effect size used here is \(f^2\), which has been categorized as follows (see Cohen): small \(≥\) 0.02, medium \(≥\) 0.15, and large \(≥\) 0.35. So, in order to determine if the data is sufficient to find a weak effect when comparing 2 groups with 30 participants in each group (df numerator: 2 - 1 = 1; df denominator: (30 - 1) + (30 - 1) = 58) and a significance level of \(\alpha\) = .05, we can use the following code.
# general linear model
pwrglm <- pwr.f2.test(
  u = 1,
  v = 58,
  f2 = .02,
  sig.level = 0.05
)
# inspect results
pwrglm
Multiple regression power calculation
u = 1
v = 58
f2 = 0.02
sig.level = 0.05
power = 0.1899
The results show that the regression analysis used to evaluate the effectiveness of different teaching styles only had a power of 0.19 (i.e., about 19 percent).
Power Analysis for t-tests
For t-tests (both paired and two-sample t-tests), the effect size measure is Cohen's \(d\), which has been categorized as follows (see Cohen): small \(≥\) 0.2, medium \(≥\) 0.5, and large \(≥\) 0.8.
Paired t-test
So in order to determine if the data is sufficient to find a weak effect when comparing the pre- and post-test results of a group with 30 participants, evaluating an undirected hypothesis (thus the two-tailed approach), and a significance level at \(\alpha\) = .05, we can use the following code.
# paired t-test
pwrpt <- pwr.t.test(
  d = 0.2,
  n = 30,
  sig.level = 0.05,
  type = "paired",
  alternative = "two.sided"
)
# inspect
pwrpt
Paired t test power calculation
n = 30
d = 0.2
sig.level = 0.05
power = 0.1852
alternative = two.sided
NOTE: n is number of *pairs*
Given the data, a weak effect in this design would only be detected with a probability of 18.5 percent. This means that we would need to substantively increase the sample size to detect a small effect with this design.
Two-sample (independent) t-test
The power in a similar scenario but with two different groups (with 25 and 35 subjects) can be determined as follows (in this case, we test a directed hypothesis that checks if the intervention leads to an increase in the outcome - hence the greater in the alternative argument):
# independent t-test
pwr2t <- pwr.t2n.test(
  d = 0.2,
  n1 = 35,
  n2 = 25,
  sig.level = 0.05,
  alternative = "greater"
)
# inspect
pwr2t
t test power calculation
n1 = 35
n2 = 25
d = 0.2
sig.level = 0.05
power = 0.1867
alternative = greater
Given the data, a weak effect in this design would only be detected with a probability of 18.7 percent. This means that we would need to substantively increase the sample size to detect a small effect with this design.
Power Analysis for \(\chi^2\)-tests
Let us now check the power of a \(\chi^2\)-test. For \(\chi^2\)-tests, the effect size measure used in the power analysis is \(w\), which has been categorized as follows (see Cohen): small \(≥\) 0.1, medium \(≥\) 0.3, and large \(≥\) 0.5. Also, keep in mind that for \(\chi^2\)-tests, at least 80 percent of cells need to have expected values \(≥\) 5 and none of the cells should have expected values smaller than 1 (see Bortz, Lienert, and Boehnke).
# x2-test
pwrx2 <- pwr.chisq.test(
  w = 0.2,
  N = 25, # total number of observations
  df = 1,
  sig.level = 0.05
)
# inspect
pwrx2
Chi squared power calculation
w = 0.2
N = 25
df = 1
sig.level = 0.05
power = 0.1701
NOTE: N is the number of observations
Given the data, a weak effect in this design would only be detected with a probability of 17 percent. This means that we would need to substantively increase the sample size to detect a small effect with this design.
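As a side note: if you do not want to pick w by rule of thumb but have expected cell proportions (e.g., from a pilot study or the literature), the pwr package also provides helper functions for computing w. The sketch below assumes a hypothetical 2-by-2 table of expected proportions; cohen.ES simply returns the conventional benchmark values.
# hypothetical expected proportions for a 2x2 design (must sum to 1)
P <- matrix(c(.10, .20, .40, .30), byrow = T, ncol = 2)
# effect size w for a two-way table (deviation from independence)
ES.w2(P)
# conventional benchmark for a small effect in a chi-square test
cohen.ES(test = "chisq", size = "small")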
EXERCISE TIME!
- For the tests above (anova, glm, paired and independent t-test, and the \(\chi^2\)-test), how many participants would you need to have a power of 80 percent?
Here are the commands we used to help you:
ANOVA:
pwr.anova.test(k=5, f=.25, sig.level=.05, n=30)
GLM:
pwr.f2.test(u = 1, v = 58, f2 = .02, sig.level = 0.05)
paired t-test:
pwr.t.test(d=0.2, n=30, sig.level=0.05, type="paired", alternative="two.sided")
independent t-test:
pwr.t2n.test(d=0.2,n1=35, n2 = 25, sig.level=0.05, alternative="greater")
\(\chi^2\)-test:
pwr.chisq.test(w=0.2, N = 25, df = 1, sig.level=0.05)
Answer
pwr.anova.test(k = 5, f = .25, sig.level = .05, power = .8)
Balanced one-way analysis of variance power calculation
k = 5
n = 39.15
f = 0.25
sig.level = 0.05
power = 0.8
NOTE: n is number in each group
pwr.f2.test(u = 1, f2 = .02, sig.level = 0.05, power = .8)
Multiple regression power calculation
u = 1
v = 392.4
f2 = 0.02
sig.level = 0.05
power = 0.8
pwr.t.test(d = 0.2, power = .8, sig.level = 0.05, type = "paired", alternative = "two.sided")
Paired t test power calculation
n = 198.2
d = 0.2
sig.level = 0.05
power = 0.8
alternative = two.sided
NOTE: n is number of *pairs*
pwr.t2n.test(d = 0.2, n1 = 310, n2 = 310, sig.level = 0.05, alternative = "greater") # by checking N values
t test power calculation
n1 = 310
n2 = 310
d = 0.2
sig.level = 0.05
power = 0.8002
alternative = greater
pwr.chisq.test(w = 0.2, power = .8, df = 1, sig.level = 0.05)
Chi squared power calculation
w = 0.2
N = 196.2
df = 1
sig.level = 0.05
power = 0.8
NOTE: N is the number of observations
Excursion: Language is never ever random
In 2005, Adam Kilgarriff made the point that language is never, ever, ever, random (Kilgarriff 2005). Here is part of the abstract:
Language users never choose words randomly, and language is essentially non-random. Statistical hypothesis testing uses a null hypothesis, which posits randomness. Hence, when we look at linguistic phenomena in corpora, the null hypothesis will never be true. Moreover, where there is enough data, we shall (almost) always be able to establish that it is not true. In corpus studies, we frequently do have enough data, so the fact that a relation between two phenomena is demonstrably non-random, does not support the inference that it is not arbitrary. We present experimental evidence of how arbitrary associations between word frequencies and corpora are systematically non-random.
This is a problem because, if we use ever bigger corpora, even the tiniest differences will become significant. Have a look at the following example.
# first let us generate some data
freqs1 <- matrix(c(10, 28, 30, 92), byrow = T, ncol = 2)
# inspect data
freqs1
[,1] [,2]
[1,] 10 28
[2,] 30 92
Now, we perform a simple \(\chi^2\)-test and extract the effect size.
# x2-test
x21 <- chisq.test(freqs1)
# inspect results
x21
Pearson's Chi-squared test with Yates' continuity correction
data: freqs1
X-squared = 0, df = 1, p-value = 1
# effect size
effectsize::effectsize(x21)
Cramer's V (adj.) | 95% CI
--------------------------------
0.00 | [0.00, 1.00]
- One-sided CIs: upper bound fixed at [1.00].
The output shows that the difference is not significant and that the effect size is extremely(!) small.
Now, let us simply increase the sample size by a factor of 1000 and also perform a \(\chi^2\)-test on this extended data set and extract the effect size.
# increase the frequencies by a factor of 1000
freqs2 <- matrix(c(10000, 28000, 30000, 92000), byrow = T, ncol = 2)
# perform the x2-test on the extended data
x22 <- chisq.test(freqs2)
# inspect results
x22
Pearson's Chi-squared test with Yates' continuity correction
data: freqs2
X-squared = 46, df = 1, p-value = 0.00000000001
# effect size
effectsize::effectsize(x22)
Cramer's V (adj.) | 95% CI
--------------------------------
0.02 | [0.01, 1.00]
- One-sided CIs: upper bound fixed at [1.00].
The output shows that the difference is now highly significant, while the effect size remains extremely small. For this reason, in a response to Kilgarriff, Stefan Gries (2005) suggested that we should always report effect sizes in addition to significance so that the reader has an understanding of whether a significant effect is actually meaningful.
This is relevant here because we have focused on the power for finding small effects as these can be considered the smallest meaningful effects. However, even tiny effects can be meaningful under certain circumstances - as such, focusing on small effects is only a rule of thumb, should be taken with a pinch of salt, and should be re-evaluated in the context of the study at hand.
Advanced Power Analysis
In this section, we will perform power analyses for mixed-effects models (both linear and generalized linear mixed models). The basis for this section is Green and MacLeod (which you can find here), a highly recommendable and thorough tutorial on performing power analysis in R. Other recommendable literature on this topic includes, e.g., Arnold et al., Johnson et al., and this tutorial.
Before we conduct a study, we should figure out, what sample we need to detect a small/medium effect with medium variability so that our model is sufficient to detect this kind of effect. In order to do this, we would generate a data set that mirrors the kind of data that we expect to get (with the properties that we expect to get). We can then fit a model to this data and check if a model would be able to detect the expected effect. However, because a single model does not tell us that much (it could simply be luck that it happened to find the effect), we run many different models on variations of the data and see how many of them find the effect. As a general rule of thumb, we want a data set that allows a model to find a medium sized effect with at least an accuracy of 80 percent (Field et al.).
In the following, we will go through how to determine what sample size we need for an example analysis.
Using piloted data and a lmer
For this analysis, we load an existing data set resulting from a piloted study that only contains the predictors (not the response variable).
# load data
regdat <- base::readRDS("tutorials/pwr/data/regdat.rda")
# inspect
head(regdat, 10)
ID Sentence Group WordOrder SentenceType
1 Part01 Sentence01 L1English V2 NoAdverbial
2 Part01 Sentence02 L1English V3 NoAdverbial
3 Part01 Sentence03 L1English V3 NoAdverbial
4 Part01 Sentence04 L1English V2 NoAdverbial
5 Part01 Sentence05 L1English V2 NoAdverbial
6 Part01 Sentence06 L1English V3 NoAdverbial
7 Part01 Sentence07 L1English V2 NoAdverbial
8 Part01 Sentence08 L1English V3 NoAdverbial
9 Part01 Sentence09 L1English V3 NoAdverbial
10 Part01 Sentence10 L1English V2 NoAdverbial
str(regdat)
'data.frame': 480 obs. of 5 variables:
$ ID : Factor w/ 20 levels "Part01","Part02",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Sentence : Factor w/ 24 levels "Sentence01","Sentence02",..: 1 2 3 4 5 6 7 8 9 10 ...
$ Group : Factor w/ 2 levels "L1English","L2English": 1 1 1 1 1 1 1 1 1 1 ...
$ WordOrder : Factor w/ 2 levels "V2","V3": 1 2 2 1 1 2 1 2 2 1 ...
$ SentenceType: Factor w/ 2 levels "NoAdverbial",..: 1 1 1 1 1 1 1 1 1 1 ...
We inspect the data and check how many levels we have for each predictor and if the levels are distributed correctly (so that we do not have incomplete information).
table(regdat$Group, regdat$WordOrder)
V2 V3
L1English 120 120
L2English 120 120
table(regdat$Group, regdat$SentenceType)
NoAdverbial SentenceAdverbial
L1English 120 120
L2English 120 120
table(regdat$ID)
Part01 Part02 Part03 Part04 Part05 Part06 Part07 Part08 Part09 Part10 Part11
24 24 24 24 24 24 24 24 24 24 24
Part12 Part13 Part14 Part15 Part16 Part17 Part18 Part19 Part20
24 24 24 24 24 24 24 24 24
table(regdat$Sentence)
Sentence01 Sentence02 Sentence03 Sentence04 Sentence05 Sentence06 Sentence07
20 20 20 20 20 20 20
Sentence08 Sentence09 Sentence10 Sentence11 Sentence12 Sentence13 Sentence14
20 20 20 20 20 20 20
Sentence15 Sentence16 Sentence17 Sentence18 Sentence19 Sentence20 Sentence21
20 20 20 20 20 20 20
Sentence22 Sentence23 Sentence24
20 20 20
We could also add a response variable (but we will do this later when we deal with post-hoc power analyses).
Generating the model
We now generate a model with pre-defined parameters. We begin by specifying these parameters, i.e., setting the effect sizes of the fixed effects and the intercept, the variability accounted for by the random effects, and the residual variance.
# Intercept + slopes for fixed effects
# (Intercept + Group, SentenceType, WordOrder, and an interaction between Group * SentenceType)
fixed <- c(.52, .52, .52, .52, .52)
# Random intercepts for Sentence and ID
rand <- list(0.5, 0.1)
# res. variance
res <- 2
We now generate the model and fit it to our data.
m1 <- makeLmer(y ~ (1 | Sentence) + (1 | ID) + Group * SentenceType + WordOrder,
  fixef = fixed,
  VarCorr = rand,
  sigma = res,
  data = regdat
)
# inspect
m1
Linear mixed model fit by REML ['lmerMod']
Formula: y ~ (1 | Sentence) + (1 | ID) + Group * SentenceType + WordOrder
Data: regdat
REML criterion at convergence: 2100
Random effects:
Groups Name Std.Dev.
Sentence (Intercept) 0.707
ID (Intercept) 0.316
Residual 2.000
Number of obs: 480, groups: Sentence, 24; ID, 20
Fixed Effects:
(Intercept)
0.52
GroupL2English
0.52
SentenceTypeSentenceAdverbial
0.52
WordOrderV3
0.52
GroupL2English:SentenceTypeSentenceAdverbial
0.52
Inspect summary table
sjPlot::tab_model(m1)
Response: y

| Predictors | Estimates | CI | p |
|---|---|---|---|
| (Intercept) | 0.52 | -0.14 – 1.18 | 0.124 |
| Group [L2English] | 0.52 | -0.06 – 1.10 | 0.078 |
| SentenceType [SentenceAdverbial] | 0.52 | -0.24 – 1.28 | 0.180 |
| WordOrder [V3] | 0.52 | -0.15 – 1.19 | 0.129 |
| Group [L2English] × SentenceType [SentenceAdverbial] | 0.52 | -0.20 – 1.24 | 0.155 |

Random effects: σ2 = 4.00, τ00 Sentence = 0.50, τ00 ID = 0.10, ICC = 0.13, N Sentence = 24, N ID = 20, Observations = 480, Marginal R2 / Conditional R2 = 0.078 / 0.198
The summary table shows that the effects are correct but none of them are reported as being significant!
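As mentioned above, we could also attach a simulated response to the piloted data (for instance, to play with post-hoc analyses). One way to do this - shown here only as a sketch, with y_sim as a hypothetical column name and an arbitrary seed - is to let simr simulate a response vector from the model we just constructed:
# simulate one response vector from the parameters specified in m1
set.seed(111)
regdat$y_sim <- simr::doSim(m1)
# inspect
head(regdat)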
Power analysis
Let us now check if the data is sufficient to detect the main effect of WordOrder. In the test argument, we use fcompare, which allows us to compare a model with that effect (our m1 model) to a model without that effect. Fortunately, we only need to specify the fixed effects structure.
sim_wo <- simr::powerSim(m1,
  nsim = 20,
  test = fcompare(y ~ Group * SentenceType)
)
# inspect results
sim_wo
Power for model comparison, (95% confidence interval):
35.00% (15.39, 59.22)
Test: Likelihood ratio
Comparison to y ~ Group * SentenceType + [re]
Based on 20 simulations, (0 warnings, 0 errors)
alpha = 0.05, nrow = 480
Time elapsed: 0 h 0 m 1 s
The data is not sufficient: it would only detect a weak effect of WordOrder with 35 percent accuracy!
Before we continue to see how to test the power of data to find interactions, let us briefly think about why I chose 0.52 when we specified the effect sizes.
Why did I set the estimates to 0.52?
Let us now inspect the effect sizes and see why I used 0.52 as an effect size in the model. We need to determine the odds ratios of the fixed effects and then convert them into Cohen’s d values for which we have associations between traditional denominations (small, medium, and large) and effect size values.
# extract fixed effect estimates
estimatesfixedeffects <- fixef(m1)
# convert estimates into odds ratios
exp(estimatesfixedeffects)
(Intercept)
1.682
GroupL2English
1.682
SentenceTypeSentenceAdverbial
1.682
WordOrderV3
1.682
GroupL2English:SentenceTypeSentenceAdverbial
1.682
We will now check the effect size which can be interpreted according to Chen, Cohen, and Chen (see also Cohen; Perugini, Gallucci, and Costantini, 2).
- small effect (Cohen's d 0.2, OR = 1.68)
- medium effect (Cohen's d 0.5, OR = 3.47)
- strong effect (Cohen's d 0.8, OR = 6.71)
Power analysis for interactions
Let us now check if the data set has enough power to detect a weak effect for the interaction between Group:SentenceType.
sim_gst <- simr::powerSim(m1,
  nsim = 20,
  # compare model with interaction to model without interaction
  test = fcompare(y ~ WordOrder + Group + SentenceType)
)
# inspect
sim_gst
Power for model comparison, (95% confidence interval):
25.00% ( 8.66, 49.10)
Test: Likelihood ratio
Comparison to y ~ WordOrder + Group + SentenceType + [re]
Based on 20 simulations, (0 warnings, 0 errors)
alpha = 0.05, nrow = 480
Time elapsed: 0 h 0 m 1 s
The data is not sufficient: it would only detect a weak effect of Group:SentenceType with 25 percent accuracy!
NOTE
Oh no… What can we do?
Extending Data
We will now extend the data to see what sample size is needed to reach the 80 percent accuracy threshold. We begin by increasing the number of sentences (from 24 to 120) to see if this would lead to a sufficient sample size. After increasing the number of sentences, we will extend the data to see how many participants we would need.
Adding sentences
m1_as <- simr::extend(m1,
  along = "Sentence",
  n = 120
)
# inspect model
m1_as
Linear mixed model fit by REML ['lmerMod']
Formula: y ~ (1 | Sentence) + (1 | ID) + Group * SentenceType + WordOrder
Data: regdat
REML criterion at convergence: 2100
Random effects:
Groups Name Std.Dev.
Sentence (Intercept) 0.707
ID (Intercept) 0.316
Residual 2.000
Number of obs: 480, groups: Sentence, 24; ID, 20
Fixed Effects:
(Intercept)
0.52
GroupL2English
0.52
SentenceTypeSentenceAdverbial
0.52
WordOrderV3
0.52
GroupL2English:SentenceTypeSentenceAdverbial
0.52
Check power when using 120 sentences
sim_m1_as_gst <- powerSim(m1_as,
  nsim = 20,
  test = fcompare(y ~ WordOrder + Group + SentenceType)
)
# inspect
sim_m1_as_gst
Power for model comparison, (95% confidence interval):
90.00% (68.30, 98.77)
Test: Likelihood ratio
Comparison to y ~ WordOrder + Group + SentenceType + [re]
Based on 20 simulations, (0 warnings, 0 errors)
alpha = 0.05, nrow = 2400
Time elapsed: 0 h 0 m 3 s
The data with 120 sentences is sufficient: it would detect a weak effect of Group:SentenceType with 90 percent accuracy!
Let us now plot a power curve to see where we cross the 80 percent threshold.
pcurve_m1_as_gst <- simr::powerCurve(m1_as,
  test = fcompare(y ~ WordOrder + Group + SentenceType),
  along = "Sentence",
  nsim = 20,
  breaks = seq(20, 120, 20)
)
# show plot
plot(pcurve_m1_as_gst)
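If you prefer the numbers behind the plot, you can also simply print the power curve object, which reports the estimated power (with confidence intervals) at each break along the number of sentences:
# inspect the estimated power at each number of sentences
pcurve_m1_as_gst
# summary(pcurve_m1_as_gst) should return a tabular version of the same information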
Using more items (or in this case sentences) is rather easy but it can make experiments longer which may lead to participants becoming tired or annoyed. An alternative would be to use more participants. Let us thus check how we can determine how many subjects we would need to reach a power of at least 80 percent.
Checking participants
What about increasing the number of participants?
We increase the number of participants to 120.
m1_ap <- simr::extend(m1,
  along = "ID",
  n = 120
)
# inspect model
m1_ap
Linear mixed model fit by REML ['lmerMod']
Formula: y ~ (1 | Sentence) + (1 | ID) + Group * SentenceType + WordOrder
Data: regdat
REML criterion at convergence: 2100
Random effects:
Groups Name Std.Dev.
Sentence (Intercept) 0.707
ID (Intercept) 0.316
Residual 2.000
Number of obs: 480, groups: Sentence, 24; ID, 20
Fixed Effects:
(Intercept)
0.52
GroupL2English
0.52
SentenceTypeSentenceAdverbial
0.52
WordOrderV3
0.52
GroupL2English:SentenceTypeSentenceAdverbial
0.52
Check power when using 120 participants
sim_m1_ap_gst <- powerSim(m1_ap,
  nsim = 20,
  test = fcompare(y ~ WordOrder + Group + SentenceType)
)
# inspect
sim_m1_ap_gst
Power for model comparison, (95% confidence interval):
80.00% (56.34, 94.27)
Test: Likelihood ratio
Comparison to y ~ WordOrder + Group + SentenceType + [re]
Based on 20 simulations, (0 warnings, 0 errors)
alpha = 0.05, nrow = 2880
Time elapsed: 0 h 0 m 3 s
The data with 120 participants is sufficient: it would detect a weak effect of Group * SentenceType with 80 percent accuracy.
pcurve_m1_ap_gst <- simr::powerCurve(m1_ap,
  test = fcompare(y ~ Group + SentenceType + WordOrder),
  along = "ID",
  nsim = 20
)
Calculating power at 10 sample sizes along ID
# show plot
plot(pcurve_m1_ap_gst)
Generating data and a glmer
In order to perform a power analysis, we will start by loading the tidyverse package to process the data and by generating a data set that we will use to determine the power of a regression model.
This simulated data set has
- 200 data points
- 2 Conditions (Control, Test)
- 10 Subjects
- 10 Items
# generate data
simdat <- data.frame(
  sub = rep(paste0("Sub", 1:10), each = 20),
  cond = rep(
    c(
      rep("Control", 10),
      rep("Test", 10)
    ),
    10
  ),
  itm = as.character(rep(1:10, 20))
) %>%
  dplyr::rename(
    Subject = 1,
    Condition = 2,
    Item = 3
  ) %>%
  dplyr::mutate_if(is.character, factor)
# inspect
head(simdat, 15)
Subject Condition Item
1 Sub1 Control 1
2 Sub1 Control 2
3 Sub1 Control 3
4 Sub1 Control 4
5 Sub1 Control 5
6 Sub1 Control 6
7 Sub1 Control 7
8 Sub1 Control 8
9 Sub1 Control 9
10 Sub1 Control 10
11 Sub1 Test 1
12 Sub1 Test 2
13 Sub1 Test 3
14 Sub1 Test 4
15 Sub1 Test 5
EXERCISE TIME!
- Can you create a data set with 5 Subjects, 5 Items (each) for 4 Conditions?
Answer
Subjects Items Condition
1 Subject1 Item1 Condition1
2 Subject1 Item2 Condition1
3 Subject1 Item3 Condition1
4 Subject1 Item4 Condition1
5 Subject1 Item5 Condition1
6 Subject1 Item1 Condition2
7 Subject1 Item2 Condition2
8 Subject1 Item3 Condition2
9 Subject1 Item4 Condition2
10 Subject1 Item5 Condition2
11 Subject1 Item1 Condition3
12 Subject1 Item2 Condition3
13 Subject1 Item3 Condition3
14 Subject1 Item4 Condition3
15 Subject1 Item5 Condition3
16 Subject1 Item1 Condition4
17 Subject1 Item2 Condition4
18 Subject1 Item3 Condition4
19 Subject1 Item4 Condition4
20 Subject1 Item5 Condition4
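The answer above only shows the resulting data set; one possible way to generate it is sketched below (simdat2 is a hypothetical object name, and expand.grid is just one of several options):
# create all combinations of 5 items, 4 conditions, and 5 subjects
simdat2 <- expand.grid(
  Items = paste0("Item", 1:5),
  Condition = paste0("Condition", 1:4),
  Subjects = paste0("Subject", 1:5)
) %>%
  dplyr::select(Subjects, Items, Condition) %>%
  dplyr::mutate_if(is.character, factor)
# inspect
head(simdat2, 20)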
Generating the model
As before with the lmer, we specify the model parameters - but when generating glmers, we only need to specify the estimates of the intercept and the fixed effects and define the variability of the random effects (we do not need to specify the residual variance).
# Intercept + slope for the fixed effect
# (Intercept + Condition)
fixed <- c(.52, .52)
# Random intercepts for Subject and Item
rand <- list(0.5, 0.1)
We now generate the model and fit it to the data.
m2 <- simr::makeGlmer(y ~ (1 | Subject) + (1 | Item) + Condition,
  family = "binomial",
  fixef = fixed,
  VarCorr = rand,
  data = simdat
)
# inspect
sjPlot::tab_model(m2)
Response: y

| Predictors | Odds Ratios | CI | p |
|---|---|---|---|
| (Intercept) | 1.68 | 0.89 – 3.17 | 0.108 |
| Condition [Test] | 1.68 | 0.94 – 3.02 | 0.081 |

Random effects: σ2 = 3.29, τ00 Subject = 0.50, τ00 Item = 0.10, ICC = 0.15, N Subject = 10, N Item = 10, Observations = 200, Marginal R2 / Conditional R2 = 0.017 / 0.169
Next, we extract the power. In this case, we use fixed in the test argument, which allows us to test a specific predictor.
# set seed for replicability
set.seed(12345)
# perform power analysis for present model
rsim_m2_c <- powerSim(m2, fixed("ConditionTest", "z"),
  nsim = 20
)
# inspect
rsim_m2_c
Power for predictor 'ConditionTest', (95% confidence interval):
35.00% (15.39, 59.22)
Test: z-test
Effect size for ConditionTest is 0.52
Based on 20 simulations, (0 warnings, 0 errors)
alpha = 0.05, nrow = 200
Time elapsed: 0 h 0 m 1 s
The data is not sufficient: it would only detect a weak effect of ConditionTest with 35 percent accuracy.
Extending the data
Like before, we can now extend the data to see how many participants or items we would need to reach the 80 percent confidence threshold.
Adding Items
We start off by increasing the number of items from 10 to 40. This means that our new data/model has the following characteristics:
- 2 Conditions
- 10 Subjects
- 40 Items (from 10)
Keep in mind though that when extending the data/model in this way, each combination occurs only once!
m2_ai <- simr::extend(m2,
  along = "Item",
  n = 40
)
Next, we plot the power curve.
pcurve_m2_ai_c <- powerCurve(m2_ai,
  fixed("ConditionTest", "z"),
  along = "Item",
  nsim = 20,
  breaks = seq(10, 40, 5)
)
Calculating power at 7 sample sizes along Item
# show plot
plot(pcurve_m2_ai_c)
The power curve shows that we breach the 80 percent threshold with about 35 items.
Adding subjects
An alternative to adding items is, of course, to use more subjects or participants. We thus continue by increasing the number of participants from 10 to 40. This means that our new data/model has the following characteristics.
- 2 Conditions
- 10 Items
- 40 Subjects (from 10)
Again, keep in mind though that when extending the data/model in this way, each combination occurs only once!
m2_as <- simr::extend(m2,
  along = "Subject",
  n = 40
)
# inspect model
sjPlot::tab_model(m2_as)
Response: y

| Predictors | Odds Ratios | CI | p |
|---|---|---|---|
| (Intercept) | 1.68 | 0.89 – 3.17 | 0.108 |
| Condition [Test] | 1.68 | 0.94 – 3.02 | 0.081 |

Random effects: σ2 = 3.29, τ00 Subject = 0.50, τ00 Item = 0.10, ICC = 0.15, N Subject = 10, N Item = 10, Observations = 200, Marginal R2 / Conditional R2 = 0.017 / 0.169
As before, we plot the power curve.
pcurve_m2_as_c <- powerCurve(m2_as,
  fixed("ConditionTest", "z"),
  along = "Subject",
  nsim = 20,
  breaks = seq(10, 40, 5)
)
Calculating power at 7 sample sizes along Subject
boundary (singular) fit: see help('isSingular')
(5/7) Simulating: |=== |(5/7) Simulating: |====== |
boundary (singular) fit: see help('isSingular')
(5/7) Simulating: |========= |(5/7) Simulating: |============ |(5/7) Simulating: |=============== |(5/7) Simulating: |================== |(5/7) Simulating: |===================== |(5/7) Simulating: |======================== |(5/7) Simulating: |=========================== |(5/7) Simulating: |============================== |(5/7) Simulating: |================================= |(5/7) Simulating: |==================================== |(5/7) Simulating: |======================================= |(5/7) Simulating: |========================================== |(5/7) Simulating: |============================================= |(5/7) Simulating: |================================================ |(5/7) Simulating: |=================================================== |(5/7) Simulating: |====================================================== |(5/7) Simulating: |========================================================= |(5/7) Simulating: |============================================================|(5/7) (6/7) (6/7) Simulating: | |(6/7) Simulating: |=== |(6/7) Simulating: |====== |(6/7) Simulating: |========= |(6/7) Simulating: |============ |(6/7) Simulating: |=============== |(6/7) Simulating: |================== |(6/7) Simulating: |===================== |(6/7) Simulating: |======================== |(6/7) Simulating: |=========================== |(6/7) Simulating: |============================== |(6/7) Simulating: |================================= |(6/7) Simulating: |==================================== |(6/7) Simulating: |======================================= |(6/7) Simulating: |========================================== |(6/7) Simulating: |============================================= |(6/7) Simulating: |================================================ |(6/7) Simulating: |=================================================== |(6/7) Simulating: |====================================================== |(6/7) Simulating: |========================================================= |(6/7) Simulating: |============================================================|(6/7) (7/7) (7/7) Simulating: | |(7/7) Simulating: |=== |(7/7) Simulating: |====== |(7/7) Simulating: |========= |(7/7) Simulating: |============ |(7/7) Simulating: |=============== |(7/7) Simulating: |================== |(7/7) Simulating: |===================== |(7/7) Simulating: |======================== |(7/7) Simulating: |=========================== |(7/7) Simulating: |============================== |(7/7) Simulating: |================================= |(7/7) Simulating: |==================================== |(7/7) Simulating: |======================================= |(7/7) Simulating: |========================================== |(7/7) Simulating: |============================================= |(7/7) Simulating: |================================================ |(7/7) Simulating: |=================================================== |(7/7) Simulating: |====================================================== |(7/7) Simulating: |========================================================= |(7/7) Simulating: |============================================================|(7/7)
# show plot
plot(pcurve_m2_as_c)
How often does each combination occur?
Only once!
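One way to verify this is to count the rows per subject-item-condition combination in the simulated data (here assuming the simdat object used further below is the data set underlying the model); each count should be 1 if every combination occurs only once.
# count how often each Subject-Item-Condition combination occurs
# (each count should be 1 if every combination occurs only once)
simdat %>%
  dplyr::count(Subject, Condition, Item) %>%
  head(10)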
So, what if we increase the number of times each combination occurs (this is particularly important when using a repeated-measures design)? Below, we increase the number of times each subject-item combination occurs from 1 to 10, so that each item is shown 10 times to the same participant. This means that our new data/model has the following characteristics.
2 Conditions
10 Items
10 Subjects
Now each combination of item and subject occurs 10 times!
# extend the data within Subject and Item so that each combination occurs 10 times
m2_asi_c <- extend(m2,
                   within = "Subject+Item",
                   n = 10)
# perform power calculation
pcurve_m2_asi_c <- powerCurve(m2_asi_c,
                              fixed("ConditionTest", "z"),
                              within = "Subject+Item",
                              nsim = 20,
                              breaks = seq(2, 10, 2))
Calculating power at 5 sample sizes within Subject+Item
[Simulation progress output omitted; a number of the simulated fits produced boundary (singular) fit warnings: see help('isSingular').]
# show plot
plot(pcurve_m2_asi_c)
If we did this, then even 5 observations per subject-item combination may be enough to reach the 80 percent threshold.
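If you prefer exact numbers over reading them off the plot, you can also print the power curve object, which reports the simulated power (with confidence intervals) at each break, just as we do for the power curves further below.
# print power estimates (with confidence intervals) at each break
print(pcurve_m2_asi_c)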
Adding subjects and items
We can also add subjects and items simultaneously to address questions like How many subjects would I need if I had 30 items? Hence, we increase both subjects and items from 10 to 30. This means that our new data/model has the following characteristics.
2 Conditions
30 Items (from 10)
30 Subjects (from 10)
In this case, we return to each combination only occurring once.
# increase the number of subjects from 10 to 30
m2_as <- simr::extend(m2,
                      along = "Subject",
                      n = 30)
# increase the number of items from 10 to 30
m2_asi <- simr::extend(m2_as,
                       along = "Item",
                       n = 30)
# inspect model
sjPlot::tab_model(m2_asi)
Dependent variable: y

| Predictors | Odds Ratios | CI | p |
|---|---|---|---|
| (Intercept) | 1.68 | 0.89 – 3.17 | 0.108 |
| Condition [Test] | 1.68 | 0.94 – 3.02 | 0.081 |
| Random Effects |  |  |  |
| σ2 | 3.29 |  |  |
| τ00 Subject | 0.50 |  |  |
| τ00 Item | 0.10 |  |  |
| ICC | 0.15 |  |  |
| N Subject | 10 |  |  |
| N Item | 10 |  |  |
| Observations | 200 |  |  |
| Marginal R2 / Conditional R2 | 0.017 / 0.169 |  |  |
We can now plot a power curve to answer the question How many subjects do you need if you have 30 items?
# power curve: how many subjects are needed if we have 30 items?
pcurve_m2_asi_c <- powerCurve(m2_asi,
                              fixed("ConditionTest", "z"),
                              along = "Subject",
                              breaks = seq(5, 30, 5),
                              nsim = 20)
Calculating power at 6 sample sizes along Subject
[Simulation progress output omitted; a number of the simulated fits produced boundary (singular) fit warnings: see help('isSingular').]
# show plot
plot(pcurve_m2_asi_c)
We can see that with 30 items, we would need only about 15 subjects to reach the 80 percent threshold.
We can also check the results in tabular form, as shown below.
# print results
print(pcurve_m2_asi_c)
Power for predictor 'ConditionTest', (95% confidence interval),
by number of levels in Subject:
5: 65.00% (40.78, 84.61) - 300 rows
10: 95.00% (75.13, 99.87) - 600 rows
15: 100.0% (83.16, 100.0) - 900 rows
20: 100.0% (83.16, 100.0) - 1200 rows
25: 100.0% (83.16, 100.0) - 1500 rows
30: 100.0% (83.16, 100.0) - 1800 rows
Time elapsed: 0 h 0 m 23 s
Post-Hoc Analyses
NOTE
Power analyses have also been used post hoc to test whether the sample size of a study was sufficient to detect meaningful effects. However, such post-hoc power calculations, where the target effect size comes from the data, give misleading results (Hoenig and Heisey; Perugini, Gallucci, and Costantini) and should thus be treated with extreme care!
We begin by adding a response variable to our data. In this case, we generate the response variable such that gazes to the area of interest (AOI) are more likely in the test condition.
# add a binary response: "yes"/"no" with a higher probability of "yes" in the test condition
simdat2 <- simdat %>%
  dplyr::arrange(Condition) %>%
  dplyr::mutate(dep = c(sample(c("yes", "no"), 100, replace = T, prob = c(.5, .5)),
                        sample(c("yes", "no"), 100, replace = T, prob = c(.7, .3)))) %>%
  dplyr::mutate_if(is.character, factor) %>%
  dplyr::rename(AOI = 4)
# inspect
head(simdat2, 20)
Subject Condition Item AOI
1 Sub1 Control 1 no
2 Sub1 Control 2 yes
3 Sub1 Control 3 no
4 Sub1 Control 4 yes
5 Sub1 Control 5 no
6 Sub1 Control 6 yes
7 Sub1 Control 7 no
8 Sub1 Control 8 yes
9 Sub1 Control 9 no
10 Sub1 Control 10 yes
11 Sub2 Control 1 no
12 Sub2 Control 2 no
13 Sub2 Control 3 yes
14 Sub2 Control 4 no
15 Sub2 Control 5 no
16 Sub2 Control 6 yes
17 Sub2 Control 7 no
18 Sub2 Control 8 yes
19 Sub2 Control 9 yes
20 Sub2 Control 10 yes
Now that we have generated some data, we will fit a model to it and perform a power analysis on the observed effects. In a first step, we load the lme4 package, set a seed (so that the results can be replicated), and then fit an initial mixed-effects model.
# set seed for replicability
set.seed(12345)
# fit model
m3 <- glmer(AOI ~ (1 | Subject) + (1 | Item) + Condition,
            family = "binomial",
            data = simdat2)
# inspect results
summary(m3)
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: AOI ~ (1 | Subject) + (1 | Item) + Condition
Data: simdat2
AIC BIC logLik deviance df.resid
259.3 272.5 -125.6 251.3 196
Scaled residuals:
Min 1Q Median 3Q Max
-1.687 -1.151 0.593 0.869 0.869
Random effects:
Groups Name Variance Std.Dev.
Subject (Intercept) 0 0
Item (Intercept) 0 0
Number of obs: 200, groups: Subject, 10; Item, 10
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.282 0.202 1.40 0.163
ConditionTest 0.764 0.305 2.51 0.012 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr)
ConditinTst -0.663
optimizer (Nelder_Mead) convergence code: 0 (OK)
boundary (singular) fit: see help('isSingular')
We now check the effect sizes of the predictors in the model. We can do this by displaying the results of the model using the tab_model
function from the sjPlot
package.
# tabulate results
sjPlot::tab_model(m3)
Dependent variable: AOI

| Predictors | Odds Ratios | CI | p |
|---|---|---|---|
| (Intercept) | 1.33 | 0.89 – 1.97 | 0.163 |
| Condition [Test] | 2.15 | 1.18 – 3.90 | 0.012 |
| Random Effects |  |  |  |
| σ2 | 3.29 |  |  |
| τ00 Subject | 0.00 |  |  |
| τ00 Item | 0.00 |  |  |
| N Subject | 10 |  |  |
| N Item | 10 |  |  |
| Observations | 200 |  |  |
| Marginal R2 / Conditional R2 | 0.043 / NA |  |  |
Now, we perform power analysis on an observed effect. This analysis tells us how likely the model is to find an observed effect given the data.
NOTE
We use a very low number of simulations (20) and the default z-test, which is suboptimal for small samples (Bolker et al.). In a proper study, you should increase the number of simulations (to at least 1000) and use bootstrapping rather than a z-test (cf. Halekoh, Højsgaard, et al.). A sketch of such a setup is shown below.
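The sketch below shows what such a setup could look like. The object name m3_pwr_1000 is only illustrative, the run can take a long time, and the parametric-bootstrap test in the commented-out line is an assumption that depends on your simr/pbkrtest installation.
# set seed for replicability
set.seed(12345)
# same test as above, but with a more trustworthy number of simulations
m3_pwr_1000 <- powerSim(m3,
                        fixed("ConditionTest", "z"),
                        nsim = 1000)
# if your simr/pbkrtest installation supports it, a parametric bootstrap
# can be requested instead of the z-test (assumption; check ?simr::fixed):
# m3_pwr_1000 <- powerSim(m3, fixed("ConditionTest", "pb"), nsim = 1000)
# inspect results
m3_pwr_1000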
What is the probability of finding the observed effect given the data?
# set seed for replicability
set.seed(12345)
# perform power analysis for present model
m3_pwr <- powerSim(m3,
                   fixed("ConditionTest", "z"),
                   nsim = 20)
# inspect results
m3_pwr
The results of the power analysis show that, given the data at hand, the model would have detected the effect of ConditionTest with a probability of m3_pwr$x percent. However, as stated above, the results of such post-hoc power calculations (where the target effect size comes from the data) are misleading (Hoenig and Heisey) and should thus be treated with extreme care!
Fixing effects
We will now override the effect estimates that we obtained from our "observed" data to check whether, given the size of the data, we have enough power to detect a small effect of Condition. In a first step, we check the observed effects.
# extract estimated fixed effects (log odds) and convert them to odds ratios
estimatesfixedeffects <- fixef(m3)
exp(estimatesfixedeffects)
(Intercept) ConditionTest
1.326 2.147
We can see that the effect of Condition is rather small, which makes it very hard to detect. We will now change the size of the effect of ConditionTest so that it represents a truly small effect, i.e. one that is on the brink of being noise but just strong enough to still be considered small. In other words, we will set the effect so that its odds ratio is exactly 1.68.
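Why 0.519? glmer estimates effects on the logit (log-odds) scale, so an odds ratio of 1.68 corresponds to a coefficient of log(1.68):
# an odds ratio of 1.68 expressed on the log-odds (logit) scale
log(1.68)
# ~ 0.519 (more precisely 0.5187938), hence the value used below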
# set seed for replicability
set.seed(12345)
# set the effect of ConditionTest to a small effect (odds ratio of 1.68)
fixef(m3)["ConditionTest"] <- 0.519
# check the updated fixed effects (as odds ratios)
estimatesfixedeffects <- fixef(m3)
exp(estimatesfixedeffects)
(Intercept) ConditionTest
1.326 1.680
What is the probability of finding a weak effect given the data?
We now perform the power analysis.
# set seed for replicability
set.seed(12345)
# perform power analysis for the small effect
m3_pwr_se <- powerSim(m3,
                      fixed("ConditionTest", "z"),
                      nsim = 20)
# inspect results
m3_pwr_se
The data are not sufficient: they would detect a weak effect of Condition with a power of only 8 percent.
Power Analysis of Extended Data
We will now extend the data to see what sample size is needed to reach the 80 percent power threshold. We begin by increasing the number of items from 10 to 30 to see if this leads to a sufficient sample size.
# increase sample size
m3_ai <- extend(m3,
                along = "Item",
                n = 30)
# perform power simulation
m3_ai_pwr <- powerSim(m3_ai,
                      fixed("ConditionTest", "z"),
                      nsim = 20)
# show results
m3_ai_pwr
The data with 30 items are still not sufficient: they would detect a weak effect of Condition with a power of only 18 percent.
At what number of items are the data sufficient to detect an effect?
# power curve: how many items are needed (with 10 subjects)?
pcurve_m3_asi_c <- powerCurve(m3_ai,
                              fixed("ConditionTest", "z"),
                              along = "Item",
                              breaks = seq(5, 30, 5),
                              nsim = 20)
[Simulation progress output omitted.]
# show plot
plot(pcurve_m3_asi_c)
We reach the 80 percent threshold with about 25 items.
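As before, printing the power curve object shows the exact power estimates and confidence intervals for each number of items, which makes it easier to see where the curve crosses the 80 percent threshold.
# print results
print(pcurve_m3_asi_c)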
Resources and Links
If you want to know more, please have a look/go at the following resources:
Also check out Jacob Westfall’s website for Power Analysis with Crossed Random Effects based on “Statistical Power and Optimal Design in Experiments in Which Samples of Participants Respond to Samples of Stimuli” (2014).
See “The Cave of Shadows: Addressing the Human Factor with Generalized Additive Mixed Models” (2017) for power in generalized additive mixed models and why maximal models can be ill-advised. Also, see Matuschek et al. (2017) for a simulation study on the detrimental effects of complex random effect structures on power.
Citation & Session Info
Schweinberger, Martin. 2025. Power Analysis in R. Brisbane: The University of Queensland. url: https://ladal.edu.au/tutorials/pwr.html (Version 2025.01.30).
@manual{schweinberger2025pwr,
author = {Schweinberger, Martin},
title = {Power Analysis in R},
note = {tutorials/pwr/pwr.html},
year = {2025},
organization = "The University of Queensland, Australia. School of Languages and Cultures},
address = {Brisbane},
edition = {2025.01.30}
}
sessionInfo()
R version 4.4.2 (2024-10-31)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 24.04.1 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.12.0
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.0
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
time zone: Australia/Brisbane
tzcode source: system (glibc)
attached base packages:
[1] stats graphics grDevices datasets utils methods base
other attached packages:
[1] entity_0.1.0 flextable_0.9.7 knitr_1.49 DT_0.33
[5] effectsize_1.0.0 simr_1.0.7 sjPlot_2.8.17 lme4_1.1-35.5
[9] Matrix_1.7-2 pwr_1.3-0 DescTools_0.99.58 lubridate_1.9.4
[13] forcats_1.0.0 stringr_1.5.1 dplyr_1.1.4 purrr_1.0.2
[17] readr_2.1.5 tidyr_1.3.1 tibble_3.2.1 ggplot2_3.5.1
[21] tidyverse_2.0.0
loaded via a namespace (and not attached):
[1] gld_2.6.6 sandwich_3.1-1 readxl_1.4.3
[4] rlang_1.1.5 magrittr_2.0.3 multcomp_1.4-26
[7] e1071_1.7-16 compiler_4.4.2 mgcv_1.9-1
[10] systemfonts_1.1.0 vctrs_0.6.5 pkgconfig_2.0.3
[13] fastmap_1.2.0 backports_1.5.0 labeling_0.4.3
[16] rmarkdown_2.29 tzdb_0.4.0 haven_2.5.4
[19] nloptr_2.1.1 ragg_1.3.3 xfun_0.49
[22] jsonlite_1.8.9 uuid_1.2-1 sjmisc_2.8.10
[25] ggeffects_2.0.0 parallel_4.4.2 broom_1.0.7
[28] R6_2.5.1 stringi_1.8.4 car_3.1-3
[31] boot_1.3-31 cellranger_1.1.0 estimability_1.5.1
[34] assertthat_0.2.1 Rcpp_1.0.13-1 iterators_1.0.14
[37] klippy_0.0.0.9500 zoo_1.8-12 pacman_0.5.1
[40] parameters_0.24.0 splines_4.4.2 timechange_0.3.0
[43] tidyselect_1.2.1 rstudioapi_0.17.1 abind_1.4-8
[46] yaml_2.3.10 codetools_0.2-20 RLRsim_3.1-8
[49] sjlabelled_1.2.0 plyr_1.8.9 lattice_0.22-6
[52] withr_3.0.2 bayestestR_0.15.0 askpass_1.2.1
[55] coda_0.19-4.1 evaluate_1.0.3 survival_3.8-3
[58] proxy_0.4-27 zip_2.3.1 xml2_1.3.6
[61] pillar_1.10.1 carData_3.0-5 renv_1.0.11
[64] insight_1.0.0 generics_0.1.3 hms_1.1.3
[67] munsell_0.5.1 scales_1.3.0 rootSolve_1.8.2.4
[70] minqa_1.2.8 xtable_1.8-4 class_7.3-23
[73] glue_1.8.0 gdtools_0.4.1 emmeans_1.10.6
[76] lmom_3.2 tools_4.4.2 data.table_1.16.4
[79] Exact_3.3 mvtnorm_1.3-2 grid_4.4.2
[82] plotrix_3.8-4 datawizard_0.13.0 colorspace_2.1-1
[85] nlme_3.1-166 performance_0.12.4 Formula_1.2-5
[88] cli_3.6.3 textshaping_0.4.1 officer_0.6.7
[91] fontBitstreamVera_0.1.1 expm_1.0-0 binom_1.1-1.1
[94] sjstats_0.19.0 gtable_0.3.6 fontquiver_0.2.1
[97] digest_0.6.37 pbkrtest_0.5.3 TH.data_1.1-2
[100] htmlwidgets_1.6.4 farver_2.1.2 htmltools_0.5.8.1
[103] lifecycle_1.0.4 httr_1.4.7 openssl_2.3.0
[106] fontLiberation_0.1.0 MASS_7.3-64