This document serves as an annotated bibliography of papers and other resources that I’ve found to be useful as an author and reviewer. The entries are loosely organized by topic.

I have also included selected slide decks for methods talks that I’ve given since September 1, 2020, and a link to Comptrasts, a web application for conducting custom contrast tests.

This started as a personal resource that I used to respond quickly to common stats questions. Please note that this list is not meant to be exhaustive in any way. If you have any additional suggestions for inclusion, please send them my way. I hope you find this helpful. Lastly, any errors are my own.

Curriculum Vitae

Guggenmos CV - Jan 2023

Resources by Topic

Graduate Statistics Textbooks

Myers et al. (2010)

This is a great reference for basic graduate-level experimental statistics. It covers most first-year stats topics in an accessible way and includes a treatment of Expected Mean Squares derivation, which is not always found in texts like these.

Maxwell et al. (2018)

Whenever I don’t know where to find something, I turn to this book first. It is a great reference for the many analysis questions that don’t come up often but need to be considered.

Rencher and Schaalje (2008)

This text is a reference for linear modeling that is fairly heavy on linear algebra, but does a great job at helping you understand how linear modeling works from an “under the hood” perspective. Keep in mind that if you aren’t comfortable with linear algebra and/or matrix notation, you may want to review that first.

Huitema (2011)

This book is positioned as a reference for ANCOVA, which it covers well, but it also contains a lot of good information about ANOVA, effect sizes, comparisons between the Fisher and GLM approaches to covariance analysis, and even longitudinal designs. It seems to be frequently cited as a reference for papers in accounting that use an EE-based approach.

Experimental Design

Shadish et al. (2002)

I recommend that anybody who designs experiments read (and reference) this book. It is a very detailed look at experimental and quasi-experimental designs and the threats to validity that come from these design choices.

Rosenthal and Rosnow (1991)

This text also provides great checklists of threats to validity that should be considered when designing experiments.

Hauser et al. (2018)

An interesting discussion about whether manipulation checks do more harm than good. A good argument for not including manipulation checks when they aren’t really needed.

Libby et al. (2002), Libby (1981), and Libby and Lewis (1982)

Classic papers related to experimental design in accounting. The early 1980s articles and book chapters are more difficult to track down, but they are well worth the effort, especially for gaining a high-level understanding of “early work” in experimental accounting research. They focus on financial and auditing research but are applicable to other topical areas.

Bloomfield et al. (2016)

This paper discusses the strengths and weaknesses of different types of research paradigms and provides advice for approaching the data-gathering process.

Gelman (2018)

A pretty scathing assessment of null hypothesis significance testing (NHST) and how its prevalence has affected research in many fields, especially social psychology. An interesting, but somewhat depressing, read.

Asay et al. (2019)

A working paper that discusses the prevalence of process-theory testing in accounting. The paper discusses how research design and analysis choices can affect the ability of researchers to draw strong causal inferences. Moderation, mediation, and multiple-experiment approaches are discussed.

Button et al. (2013)

Great paper that discusses how small sample sizes not only hurt the detection of effects (Type II error) but can also inflate Type I error. Very applicable to our research.

Podsakoff et al. (2003)

Paper on common methods bias that suggests some ways to control for these sources of bias. Certainly relevant to accounting research, as we commonly shy away from multi-method work.

Sutton and Staw (1995)

This is another must-read. It does an excellent job of discussing what makes something theory, as well as the things we call theory that may not be.

Charness et al. (2012)

Gives a good overview of the pros and cons of between- vs. within-subjects designs. The paper is written from an econ perspective and is very approachable.

Descriptive Statistics, Data Visualization, and Data Science

Wickham and Grolemund (2017)

Whether you’re an R user or not, this text is a great reference for establishing a data science workflow. A reproducible data science workflow saves time and preserves accuracy when describing experimental results.

Tukey (1980)

This paper makes a strong case for the importance of descriptive work and data visualization, in general.

Gelman (2011), Gelman et al. (2002), and Vessey (1991)

This set of papers discusses the relative benefits of graphs vs. tables. For many of the results we publish, graphs are probably more useful even though tables are often more common.
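For a sense of what this looks like in practice, here is a minimal, hypothetical ggplot2 sketch (simulated 2x2 data; all names and values are invented for illustration) that plots cell means with standard-error bars, the kind of display these papers argue for over a table of means:

```r
library(ggplot2)  # assumed installed

set.seed(13)
d <- expand.grid(cond  = factor(c("control", "treatment")),
                 frame = factor(c("gain", "loss")),
                 rep   = 1:30)
d$y <- 5 + 0.8 * (d$cond == "treatment") * (d$frame == "loss") + rnorm(nrow(d))

# Cell means with standard-error bars: the interaction is visible at a glance
ggplot(d, aes(frame, y, group = cond, color = cond)) +
  stat_summary(fun = mean, geom = "line") +
  stat_summary(fun = mean, geom = "point") +
  stat_summary(fun.data = mean_se, geom = "errorbar", width = 0.1)
```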

ANOVA, ANCOVA, Contrast Testing, Simple Effects, and Other Comparisons

Rutherford (2001)

Good focused text on ANOVA and ANCOVA from a GLM approach. Lots of good examples that can be worked through.

Gelman (2005)

This paper makes the case for why ANOVA is incredibly important for experimental analysis. Gelman also describes how ANOVA and regression are not equivalent, contrary to what is taught in many stats courses. Finally, Gelman makes the case for thinking about ANOVA in a hierarchical modeling framework.

Cohen (2002)

This paper provides a technique for reproducing an ANOVA table from summary statistics, which can be implemented easily in Excel and is really useful when doing reviews. Note that there does appear to be a typo in the paper that makes it hard to double-check your work. Happy to help if you run into that issue.
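To give the flavor of the technique, here is a minimal R sketch for a one-way design with equal cell sizes (the cell statistics below are hypothetical) that rebuilds the F-test from means, SDs, and n:

```r
# Hypothetical summary statistics for a 3-cell, one-way design with equal n
means <- c(4.2, 5.1, 5.8)   # cell means
sds   <- c(1.1, 1.3, 1.2)   # cell standard deviations
n     <- 30                 # per-cell sample size
k     <- length(means)

ss_between <- n * sum((means - mean(means))^2)
ms_between <- ss_between / (k - 1)
ms_within  <- mean(sds^2)   # pooled within-cell variance (valid with equal n)
F_stat     <- ms_between / ms_within
p_value    <- pf(F_stat, k - 1, k * (n - 1), lower.tail = FALSE)
c(F = F_stat, p = p_value)
```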

Yzerbyt et al. (2004), Miller and Chapman (2001), Borm et al. (2007), and Evans and Anastasio (1968)

These papers all address methodological concerns related to ANCOVA. Missteps in ANCOVA can have serious effects on Type I error with the types of designs that are common in experimental accounting research. If you’re considering using an ANCOVA, this is something you’ll want to take a close look at.
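One standard diagnostic from this literature is the homogeneity-of-regression-slopes check: if the covariate-by-treatment interaction is significant, the usual ANCOVA model is misspecified. A small simulated R sketch (data invented for illustration):

```r
set.seed(1)
d <- data.frame(
  treat = factor(rep(c("control", "treatment"), each = 50)),
  cov   = rnorm(100)
)
d$y <- 0.5 * d$cov + 0.3 * (d$treat == "treatment") + rnorm(100)

# Fit the covariate-by-treatment interaction before trusting aov(y ~ cov + treat);
# a significant cov:treat term means the covariate's slope differs by condition
summary(aov(y ~ cov * treat, data = d))
```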

Buckless and Ravenscroft (1990), Guggenmos et al. (2018), Rosenthal et al. (2000), Shieh (2017), Schad et al. (2020), and Lai and Kelley (2012)

These are all resources for information about contrast testing. The Shieh paper discusses contrasts in ANCOVA but is restricted to one-way ANCOVA designs.
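For reference, here is a minimal base-R sketch of a custom contrast test on simulated one-way data (the weights and effect sizes are hypothetical), following the standard formulation in these sources where the contrast estimate is the weighted sum of cell means:

```r
set.seed(2)
g <- factor(rep(c("a", "b", "c", "d"), each = 25))
y <- c(1, 1, 1, 3)[as.integer(g)] + rnorm(100)   # cell "d" is elevated

w <- c(-1, -1, -1, 3)            # contrast weights; must sum to zero
m <- tapply(y, g, mean)          # cell means
n <- tapply(y, g, length)        # cell sizes
mse <- sum(tapply(y, g, function(x) sum((x - mean(x))^2))) / (length(y) - 4)

psi    <- sum(w * m)                  # contrast estimate
se     <- sqrt(mse * sum(w^2 / n))    # standard error of the contrast
t_stat <- psi / se
p <- 2 * pt(abs(t_stat), df = length(y) - 4, lower.tail = FALSE)
c(estimate = psi, t = t_stat, p = p)
```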

Gelman and Stern (2006)

This is one of my favorite papers. It’s so tempting for us to heuristically compare two p-values and conclude more than we should. This is exactly why we can’t use simple effects to demonstrate a significant interaction.
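A quick numerical illustration of the point (the numbers are made up): one effect is significant and the other is not, yet the test of their difference is nowhere near significant:

```r
# Hypothetical estimates: one "significant" simple effect, one not
b1 <- 0.30; se1 <- 0.10        # z = 3.0
b2 <- 0.15; se2 <- 0.10        # z = 1.5
2 * pnorm(-abs(b1 / se1))      # p ~ .003, significant
2 * pnorm(-abs(b2 / se2))      # p ~ .13, not significant

# But the *difference* between the two effects is itself not significant:
diff_z <- (b1 - b2) / sqrt(se1^2 + se2^2)   # ~1.06
2 * pnorm(-abs(diff_z))                     # p ~ .29
```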

Algina and Olejnik (1984), Brown and Forsythe (1974), Delacre et al. (2017), Fagerland and Sandvik (2009), Schucany and Tony Ng (2006), and Zimmerman (2004)

These papers are all about t-tests and other simple comparisons. The main takeaway is that we should probably all be using Welch’s t instead of Student’s t by default. There are also some papers here that discuss other considerations for what we think of as really simple tests.
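In R this advice costs nothing to follow, since t.test() already defaults to Welch’s procedure. A small simulated comparison:

```r
set.seed(3)
x <- rnorm(40, mean = 0,   sd = 1)
y <- rnorm(60, mean = 0.5, sd = 2)   # unequal n and unequal variances

t.test(x, y)                    # Welch's t: R's default (var.equal = FALSE)
t.test(x, y, var.equal = TRUE)  # Student's t, shown only for comparison
```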

Brauer and Curtin (2018), Barr et al. (2013), and Judd et al. (2012)

These papers make the argument that linear mixed-effects models (LMEMs) are more appropriate than repeated measures ANOVA for hypothesis testing. I don’t disagree with the sentiment, but also note that many of the designs that are common in experimental accounting research would generate mathematically equivalent results under either approach. (Thanks to Amanda Carlson for pointing these out to me.)
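A minimal sketch of the kind of model these papers advocate, using the lme4 package (assumed installed) on simulated within-subjects data:

```r
library(lme4)  # assumed installed

set.seed(4)
d <- expand.grid(subject = factor(1:30), cond = factor(c("low", "high")))
d$y <- 0.4 * (d$cond == "high") + rnorm(30)[d$subject] + rnorm(nrow(d))

# Random intercept per subject; with one observation per subject-condition
# cell, this mirrors a one-factor repeated measures ANOVA
summary(lmer(y ~ cond + (1 | subject), data = d))
```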

Piercey (2022)

This paper lays out some very common missteps made when using covariates, including why just “throwing” a covariate into an ANOVA doesn’t really answer the question most people have in mind when they ask whether a researcher has “controlled for X.” As of this writing, the paper is forthcoming in AJPT.

Regression (OLS, GLM, and other)

Gomila (2020)

This paper shows analytically and through simulation that OLS regression is more appropriate than logistic regression for a binary outcome variable when fixed-effects IVs are being used. This is the case much of the time in accounting, so it is very applicable!
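The practical upshot, in a toy simulation (all parameters invented): the linear probability model reports the treatment effect on P(y = 1) directly, while the logit coefficient is a log odds ratio that needs transformation before it answers the same question:

```r
set.seed(5)
treat <- rep(0:1, each = 100)
y <- rbinom(200, 1, 0.3 + 0.2 * treat)  # binary outcome; true effect = .20

# Linear probability model: the slope is the effect on P(y = 1)
coef(lm(y ~ treat))
# Logistic regression: the slope is a log odds ratio, not the effect itself
coef(glm(y ~ treat, family = binomial))
```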

Brambor et al. (2006)

This paper is from the political econ literature and is mostly concerned with observational data, but does a great job at explaining the effects of model misspecification.

Non-parametric Statistics

Sawilowsky et al. (1989) and Thompson (1991)

These papers examine performance of rank-transformed vs. parametric analyses in the context of ANOVA.
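For concreteness, the rank-transform procedure these papers evaluate is just the usual factorial ANOVA run on rank(y). A simulated sketch with hypothetical skewed data; per the papers above, the interaction test from this procedure should be treated with caution:

```r
set.seed(6)
d <- expand.grid(a = factor(c("lo", "hi")), b = factor(c("lo", "hi")), rep = 1:25)
d$y <- rexp(nrow(d))   # skewed errors, the usual motivation for ranking

# Rank-transform approach: ordinary factorial ANOVA on the ranked outcome
summary(aov(rank(y) ~ a * b, data = d))
```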

Jonckheere (1954)

A reference for the Jonckheere-Terpstra test for ordered cell means.
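As a hedged illustration, one R implementation lives in the clinfun package (assumed installed); the data below are simulated with monotonically increasing means:

```r
library(clinfun)  # one package implementing the test; assumed installed

set.seed(7)
g <- rep(1:3, each = 20)    # ordered group codes: 1 < 2 < 3
y <- 0.4 * g + rnorm(60)

# Tests against the ordered alternative mu1 <= mu2 <= mu3 (with at least one <)
jonckheere.test(y, g, alternative = "increasing")
```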

Structural Equation Modeling and Conditional Process Analysis

Hayes (2018)

This is the main reference for PROCESS analysis and covers everything from simple mediation to more complex conditional process analysis designs. Chapter 14 answers many very common questions and provides a good framework for approaching these analyses.
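The core of the simplest case (Hayes’ Model 4, simple mediation) can be sketched in a few lines of base R as a percentile bootstrap of the indirect effect a*b on simulated data. This is a sketch of the underlying logic, not a substitute for PROCESS itself:

```r
set.seed(8)
n <- 200
x <- rbinom(n, 1, 0.5)              # manipulated IV
m <- 0.5 * x + rnorm(n)             # mediator
y <- 0.4 * m + 0.2 * x + rnorm(n)   # outcome
d <- data.frame(x, m, y)

# Percentile bootstrap of the indirect effect a*b
boot_ab <- replicate(5000, {
  i <- sample(n, replace = TRUE)
  a <- coef(lm(m ~ x,     data = d[i, ]))["x"]   # a path
  b <- coef(lm(y ~ m + x, data = d[i, ]))["m"]   # b path
  a * b
})
quantile(boot_ab, c(0.025, 0.975))  # CI excluding 0 -> indirect effect
```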

Hayes et al. (2017)

This paper discusses the choice between SEM and PROCESS analysis, especially with respect to models without latent variable measurement. The takeaway is that for many of the models we look at in accounting research (simple mediation, no measurement model), SEM and PROCESS lead to very similar results.

Hayes and Montoya (2017)

This paper extends PROCESS to handle multi-categorical IVs and continuous moderators using the Johnson-Neyman procedure. It is important when using PROCESS with more complex designs.

Cole and Preacher (2014)

This paper addresses the problem that occurs when manifest (not latent) variables are used in path analysis. More specifically, even small amounts of measurement error can lead to problems with both Type I and Type II error, as well as biased path coefficients.
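The attenuation problem is easy to see in a toy simulation (all values hypothetical): adding measurement error to a manifest variable biases its path coefficient toward zero, which in a multi-path model can also distort the other paths:

```r
set.seed(14)
n <- 1000
m_true <- rnorm(n)                   # true (latent) variable
y <- 0.5 * m_true + rnorm(n)         # true path coefficient = 0.5
m_obs <- m_true + rnorm(n, sd = 1)   # manifest measure with error (reliability = .5)

coef(lm(y ~ m_true))["m_true"]  # recovers ~0.5
coef(lm(y ~ m_obs))["m_obs"]    # attenuated to ~0.25
```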

MacKinnon et al. (2000)

This paper discusses mediation, confounding, and suppression - all of which may occur when a researcher is looking to examine processes underlying causal relationships. The paper is premised on a Baron and Kenny (1986) sequential regression approach, but the concepts here are still important to understand.

Yzerbyt et al. (2018)

This paper calls into question Hayes’ claim that the index approach (versus the components approach) to demonstrating mediation is superior. Yzerbyt et al. argue that common implementations of the index approach increase Type I error.

Hu et al. (1992), Hu and Bentler (1999), Hu and Bentler (1998), Jiang and Yuan (2017), and Marsh et al. (2004)

These papers all discuss fit indices. The Hu and Bentler articles are the source of the commonly cited “cutoffs” for good fit in SEM models. However, Marsh et al. (2004) argue that these cutoffs are mostly arbitrary and that some of the assumptions they were built on are tenuous at best.
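If you want to see the indices in question, the lavaan R package (assumed installed) reports them for any fitted model; here is the package’s canonical CFA example on its built-in HolzingerSwineford1939 data:

```r
library(lavaan)  # assumed installed

# Three-factor CFA on lavaan's built-in example data
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
fit <- cfa(model, data = HolzingerSwineford1939)
fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))  # the debated indices
```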

Sobel (1982) and Sobel (1986)

Classic articles that discuss Sobel tests (which have largely been abandoned in our literature).

Montoya (2016)

This paper is Amanda Montoya’s dissertation. It goes into more detail about extending the Johnson-Neyman procedure to categorical IVs and should be read in concert with Hayes and Montoya (2017).

Bayesian Data Analysis (BDA)

van de Schoot et al. (2017)

Large systematic review of the use of Bayesian analysis in psychology.

Bem (2011) and Wagenmakers et al. (2011)

This pair of papers was instrumental in bringing the NHST vs. BDA debate into mainstream cognitive psychology. The Bem (2011) paper presents evidence that future events may retroactively affect responses (Psi). Wagenmakers et al. (2011) argue that Bem’s (2011) results are more evidence that our analysis techniques need to change than evidence that precognition exists.

Leemis and McQueston (2008)

This paper presents an overview of the relationships between different univariate distributions. Knowledge of these relationships can be very useful when conducting BDA.

Kruschke (2015)

This is a very approachable, yet useful, textbook on conducting BDA from a psych methods perspective. The book comes with code examples in BUGS, JAGS, and Stan that can be adapted to suit many experimental designs. If you read a couple of the introductory BDA articles and want to know more, this is a good place to look.

Gelman et al. (2013)

This is a classic reference and textbook on BDA. The good news is that it is detailed and covers many nuances of BDA. The bad news is that it may not be approachable for those who are not mathematically inclined. Highly recommended, but know that it may be a difficult read depending on your background.

Dienes (2014)

Very approachable article that discusses how to use BDA to shed more light on (unexpected) null results. Applicable to our work as we sometimes see papers that argue that a null is due to a lack of power or that a null is due to the absence of an effect. This paper discusses both of those scenarios as well as the detection of additivity vs. interaction.

Kruschke (2018)

Discussion of ROPE (Region of Practical Equivalence) analysis with respect to Bayesian parameter estimation. Straightforward discussion of the ROPE decision rule process.
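The decision rule itself is simple enough to sketch in a few lines of R. Here the posterior draws are simulated stand-ins for your sampler’s output, the ROPE bounds are hypothetical, and an equal-tailed interval is used as a crude substitute for a true HDI:

```r
set.seed(9)
posterior <- rnorm(10000, mean = 0.03, sd = 0.02)  # stand-in posterior draws
rope <- c(-0.05, 0.05)                             # region of practical equivalence

hdi <- quantile(posterior, c(0.025, 0.975))  # equal-tailed 95% interval (not a true HDI)
if (hdi[1] > rope[2] || hdi[2] < rope[1]) {
  "interval entirely outside ROPE: reject the null value"
} else if (hdi[1] >= rope[1] && hdi[2] <= rope[2]) {
  "interval entirely inside ROPE: accept practical equivalence"
} else {
  "interval overlaps ROPE: withhold judgment"
}
```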

Kruschke and Liddell (2018a)

An excellent intro to BDA techniques for those who haven’t seen much of it.

Kruschke and Liddell (2018b)

This article talks about hypothesis testing and confidence (credible) intervals from both an NHST and a BDA perspective. It is good for learning how to think about the benefits that BDA offers relative to NHST.

Kruschke (2011)

This paper introduces the two paradigms for explicit tests of null hypotheses under BDA (Bayesian parameter estimation and Bayes Factor analysis) and contrasts the two approaches.

Guggenmos and Bennett (2019)

This paper provides accounting researchers with an approach to BDA that uses both model comparison and Bayes Factor analysis. The analysis is conducted using JASP and default priors, making it fairly painless for researchers less familiar with coding and syntax-based stats software.

Wagenmakers, Love, et al. (2018) and Wagenmakers, Marsman, et al. (2018)

These two papers provide a “user-friendly” introduction to conducting BDAs with the JASP software package.

van Doorn et al. (2020)

Provides reporting guidelines for authors presenting BDAs (in psychology).

Rouder and Morey (2012), Rouder et al. (2017), Jarosz and Wiley (2014), Ly et al. (2016), Rouder et al. (2012), and Berger and Pericchi (1996)

These papers are all related to Bayes Factor analysis, which provides information about the relative likelihood of two models (one of which can be a null-effects model, allowing for explicit null hypothesis testing).
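For a concrete entry point, the BayesFactor R package (assumed installed) implements the default priors developed in several of these papers; a two-sample example on simulated data:

```r
library(BayesFactor)  # assumed installed

set.seed(10)
x <- rnorm(50, mean = 0.3)
y <- rnorm(50, mean = 0)

# BF10 > 1 favors the alternative; BF10 < 1 favors the null model,
# which is what makes explicit null hypothesis testing possible
ttestBF(x = x, y = y)
```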

van de Schoot et al. (2015)

Information on the effects of small data sets on inferences drawn from BDA.

Archival Methods

Even more so than the other sections, this section is far from exhaustive. These are papers that I have found useful when thinking about archival analyses (especially as they relate to research questions I am interested in). This is definitely a section where additional suggestions are welcomed.

Leone et al. (2019)

A discussion of how winsorizing and other techniques used to deal with outliers and other influential observations can affect inference. Advocates for the use of robust regression in these circumstances.
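A small simulated contrast of the approaches discussed here (data and cutoffs hypothetical): OLS on raw data, OLS after winsorizing at the 1st/99th percentiles, and the robust (M-estimation) regression Leone et al. advocate, via MASS::rlm():

```r
library(MASS)  # provides rlm(); ships with R as a recommended package

set.seed(11)
x <- rnorm(100)
y <- 0.5 * x + rnorm(100)
y[1:3] <- y[1:3] + 10                 # a few influential observations

# Winsorize y at the 1st/99th percentiles (a common archival convention)
q <- quantile(y, c(0.01, 0.99))
y_w <- pmin(pmax(y, q[1]), q[2])

coef(lm(y ~ x))      # OLS, distorted by the outliers
coef(lm(y_w ~ x))    # OLS on winsorized data
coef(rlm(y ~ x))     # robust M-estimation regression
```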

Shipman et al. (2017)

Paper that discusses some of the issues that arise when using propensity score matching (PSM) in accounting settings, the rise of this method, and the sensitivity of results to small design choices.
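As a hedged sketch of the basic workflow (the MatchIt package is one common implementation; the variables and parameters below are invented): nearest-neighbor matching on the propensity score, followed by an outcome model on the matched sample. Shipman et al. show results can be sensitive to exactly these design choices (caliper width, matching ratio, and so on):

```r
library(MatchIt)  # one common PSM implementation; assumed installed

set.seed(12)
d <- data.frame(size = rnorm(500), roa = rnorm(500))
d$treat <- rbinom(500, 1, plogis(0.8 * d$size))   # selection on size
d$y <- 0.3 * d$treat + 0.5 * d$size + rnorm(500)

# Nearest-neighbor matching on the estimated propensity score
m <- matchit(treat ~ size + roa, data = d, method = "nearest")
summary(lm(y ~ treat, data = match.data(m)))      # outcome model on matched sample
```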

Armstrong and Kepler (2018)

This is a paper that discusses the “fundamental role of theory in drawing causal inferences from empirical evidence.” I don’t know enough about the underlying paper and archival methods to say which conclusion I favor, but the discussion of the role of theory is excellent.

References

Algina, J., and S. F. Olejnik. 1984. Implementing the Welch-James Procedure with Factorial Designs. Educational and Psychological Measurement 44 (1): 39–48.
Armstrong, C. S., and J. D. Kepler. 2018. Theory, research design assumptions, and causal inferences. Journal of Accounting and Economics 66 (2-3): 366–73.
Asay, S. H., R. Guggenmos, K. Kadous, L. Koonce, and R. Libby. 2019. Theory Testing and Process Evidence in Accounting Experiments. Working Paper. University of Iowa, Cornell University, Emory University, University of Texas - Austin.
Barr, D. J., R. Levy, C. Scheepers, and H. J. Tily. 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language 68 (3): 255–278.
Bem, D. J. 2011. Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of personality and social psychology 100 (3): 407–425.
Berger, J. O., and L. R. Pericchi. 1996. The Intrinsic Bayes Factor for Model Selection and Prediction. Journal of the American Statistical Association 91 (433): 109–122.
Bloomfield, R., M. W. Nelson, and E. Soltes. 2016. Gathering Data for Archival, Field, Survey, and Experimental Accounting Research. Journal of Accounting Research 54 (2): 341–395.
Borm, G. F., J. Fransen, and W. A. J. G. Lemmens. 2007. A simple sample size formula for analysis of covariance in randomized clinical trials. Journal of Clinical Epidemiology 60 (12): 1234–1238.
Brambor, T., W. R. Clark, and M. Golder. 2006. Understanding Interaction Models: Improving Empirical Analyses. Political Analysis 14 (1): 63–82.
Brauer, M., and J. J. Curtin. 2018. Linear mixed-effects models and the analysis of nonindependent data: A unified framework to analyze categorical and continuous independent variables that vary within-subjects and/or within-items. Psychological Methods 23 (3): 389–411.
Brown, M. B., and A. B. Forsythe. 1974. Robust Tests for the Equality of Variances. Journal of the American Statistical Association 69 (346): 364–367.
Buckless, F. A., and S. P. Ravenscroft. 1990. Contrast Coding: A Refinement of ANOVA in Behavioral Analysis. The Accounting Review 65 (4): 933–945.
Button, K. S., J. P. A. Ioannidis, C. Mokrysz, B. A. Nosek, J. Flint, E. S. J. Robinson, and M. R. Munafò. 2013. Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience 14 (5): 365–376.
Charness, G., U. Gneezy, and M. A. Kuhn. 2012. Experimental methods: Between-subject and within-subject design. Journal of Economic Behavior & Organization 81 (1): 1–8.
Cohen, B. H. 2002. Calculating a Factorial ANOVA From Means and Standard Deviations. Understanding Statistics 1 (3): 191–203.
Cole, D. A., and K. J. Preacher. 2014. Manifest variable path analysis: Potentially serious and misleading consequences due to uncorrected measurement error. Psychological Methods 19 (2): 300–315.
Delacre, M., D. Lakens, and C. Leys. 2017. Why Psychologists Should by Default Use Welch’s t-test Instead of Student’s t-test. International Review of Social Psychology 30 (1): 92.
Dienes, Z. 2014. Using Bayes to get the most out of non-significant results. Frontiers in Psychology 5 (Article 781): 1–17.
Evans, S. H., and E. J. Anastasio. 1968. Misuse of analysis of covariance when treatment effect and covariate are confounded. Psychological Bulletin 69 (4): 225–234.
Fagerland, M. W., and L. Sandvik. 2009. Performance of five two-sample location tests for skewed distributions with unequal variances. Contemporary Clinical Trials 30 (5): 490–496.
Gelman, A. 2005. Analysis of Variance: Why It Is More Important than Ever. The Annals of Statistics 33 (1): 1–31.
———. 2011. Why Tables Are Really Much Better Than Graphs. Journal of Computational and Graphical Statistics 20 (1): 3–7.
———. 2018. The Failure of Null Hypothesis Significance Testing When Studying Incremental Changes, and What to Do About It. Personality & Social Psychology Bulletin 44 (1): 16–23.
Gelman, A., J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin. 2013. Bayesian data analysis. 3rd ed. Boca Raton, FL: CRC Press.
Gelman, A., C. Pasarica, and R. Dodhia. 2002. Let’s Practice What We Preach: Turning Tables into Graphs. The American Statistician 56 (2): 121–130.
Gelman, A., and H. Stern. 2006. The Difference Between Significant and Not Significant is not Itself Statistically Significant. The American Statistician 60 (4): 328–331.
Gomila, R. 2020. Logistic or linear? Estimating causal effects of experimental treatments on binary outcomes using regression analysis. Journal of Experimental Psychology: General. Advance online publication: 1–11.
Guggenmos, R. D., and G. B. Bennett. 2019. The Effects of Company Image and Communication Platform Alignment on Investor Information Processing. Working Paper. Cornell University and University of Massachusetts.
Guggenmos, R. D., M. D. Piercey, and C. P. Agoglia. 2018. Custom Contrast Testing: Current Trends and a New Approach. The Accounting Review 93 (5): 223–244.
Hauser, D. J., P. C. Ellsworth, and R. Gonzalez. 2018. Are Manipulation Checks Necessary? Frontiers in Psychology 9 (Article 998).
Hayes, A. F. 2018. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. 2nd ed. Methodology in the Social Sciences. New York: The Guilford Press.
Hayes, A. F., and A. K. Montoya. 2017. A Tutorial on Testing, Visualizing, and Probing an Interaction Involving a Multicategorical Variable in Linear Regression Analysis. Communication Methods and Measures 11 (1): 1–30.
Hayes, A. F., A. K. Montoya, and N. J. Rockwood. 2017. The analysis of mechanisms and their contingencies: PROCESS versus structural equation modeling. Australasian Marketing Journal (AMJ) 25 (1): 76–81.
Hu, L., and P. M. Bentler. 1998. Fit Indices in Covariance Structure Modeling: Sensitivity to Underparameterized Model Misspecification. Psychological Methods 3 (4): 424–53.
———. 1999. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal 6 (1): 1–55.
Hu, L., P. M. Bentler, and Y. Kano. 1992. Can test statistics in covariance structure analysis be trusted? Psychological bulletin 112 (2): 351–62.
Huitema, B. E. 2011. The Analysis of Covariance and Alternatives: Statistical Methods for Experiments, Quasi-Experiments, and Single-Case Studies. 2nd ed. Hoboken, NJ: Wiley.
Jarosz, A. F., and J. Wiley. 2014. What Are the Odds? A Practical Guide to Computing and Reporting Bayes Factors. The Journal of Problem Solving 7 (1).
Jiang, G., and K.-H. Yuan. 2017. Four New Corrected Statistics for SEM With Small Samples and Nonnormally Distributed Data. Structural Equation Modeling: A Multidisciplinary Journal 24 (4): 479–494.
Jonckheere, A. R. 1954. A Distribution-Free k-Sample Test Against Ordered Alternatives. Biometrika 41 (1/2): 133–145.
Judd, C. M., J. Westfall, and D. A. Kenny. 2012. Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology 103 (1): 54–69.
Kruschke, J. K. 2011. Bayesian Assessment of Null Values Via Parameter Estimation and Model Comparison. Perspectives on Psychological Science 6 (3): 299–312.
———. 2015. Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan. 2nd ed. Boston: Academic Press.
———. 2018. Rejecting or Accepting Parameter Values in Bayesian Estimation. Advances in Methods and Practices in Psychological Science 1 (2): 270–280.
Kruschke, J. K., and T. M. Liddell. 2018a. Bayesian data analysis for newcomers. Psychonomic Bulletin & Review 25 (1): 155–177.
———. 2018b. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychonomic Bulletin & Review 25 (1): 178–206.
Lai, K., and K. Kelley. 2012. Accuracy in parameter estimation for ANCOVA and ANOVA contrasts: Sample size planning via narrow confidence intervals. British Journal of Mathematical and Statistical Psychology 65 (2): 350–370.
Leemis, L. M., and J. T. McQueston. 2008. Univariate Distribution Relationships. The American Statistician 62 (1): 45–53.
Leone, A. J., M. Minutti-Meza, and C. E. Wasley. 2019. Influential Observations and Inference in Accounting Research. The Accounting Review 94 (6): 337–364.
Libby, R. 1981. Accounting and Human Information Processing: Theory and Applications. Englewood Cliffs, New Jersey: Prentice-Hall.
Libby, R., R. Bloomfield, and M. W. Nelson. 2002. Experimental research in financial accounting. Accounting, Organizations and Society 27 (8): 775–810.
Libby, R., and B. L. Lewis. 1982. Human information processing research in accounting: The state of the art in 1982. Accounting, Organizations and Society 7 (3): 231–285.
Ly, A., J. Verhagen, and E.-J. Wagenmakers. 2016. Harold Jeffreys’s default Bayes factor hypothesis tests: Explanation, extension, and application in psychology. Journal of Mathematical Psychology 72: 19–32.
MacKinnon, D. P., J. L. Krull, and C. M. Lockwood. 2000. Equivalence of the mediation, confounding and suppression effect. Prevention Science 1 (4): 173–181.
Marsh, H. W., K.-T. Hau, and Z. Wen. 2004. In Search of Golden Rules: Comment on Hypothesis-Testing Approaches to Setting Cutoff Values for Fit Indexes and Dangers in Overgeneralizing Hu and Bentler’s (1999) Findings. Structural Equation Modeling: A Multidisciplinary Journal 11 (3): 320–341.
Maxwell, S. E., H. D. Delaney, and K. Kelley. 2018. Designing experiments and analyzing data: A model comparison perspective. 3rd ed. New York, NY: Routledge.
Miller, G. A., and J. P. Chapman. 2001. Misunderstanding analysis of covariance. Journal of Abnormal Psychology 110 (1): 40–48.
Montoya, A. K. 2016. Extending the Johnson-Neyman Procedure to Categorical Independent Variables: Mathematical Derivations and Computational Tools. Ohio State University.
Myers, J. L., A. D. Well, and R. F. Lorch Jr. 2010. Research design and statistical analysis. New York, NY: Routledge.
Piercey, M. D. 2022. Throw it in as a Covariate? Common Problems using Measured Control Variables in Experimental Research. AUDITING: A Journal of Practice & Theory.
Podsakoff, P. M., S. B. MacKenzie, J.-Y. Lee, and N. P. Podsakoff. 2003. Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology 88 (5): 879–903.
Rencher, A. C., and G. B. Schaalje. 2008. Linear models in statistics. 2nd ed. Hoboken, N.J: Wiley-Interscience.
Rosenthal, R., and R. L. Rosnow. 1991. Essentials of Behavioral Research: Methods and Data Analysis. 2nd ed. New York, NY: McGraw-Hill.
Rosenthal, R., R. L. Rosnow, and D. B. Rubin. 2000. Contrasts and effect sizes in behavioral research: A correlational approach. 1st ed. Cambridge, UK: Cambridge University Press.
Rouder, J. N., and R. D. Morey. 2012. Default Bayes Factors for Model Selection in Regression. Multivariate Behavioral Research 47 (6): 877–903.
Rouder, J. N., R. D. Morey, P. L. Speckman, and J. M. Province. 2012. Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology 56 (5): 356–374.
Rouder, J. N., R. D. Morey, J. Verhagen, A. R. Swagman, and E.-J. Wagenmakers. 2017. Bayesian analysis of factorial designs. Psychological Methods 22 (2): 304–321.
Rutherford, A. 2001. Introducing ANOVA and ANCOVA: A GLM approach. London: Sage.
Sawilowsky, S. S., R. C. Blair, and J. J. Higgins. 1989. An Investigation of the Type I Error and Power Properties of the Rank Transform Procedure in Factorial ANOVA. Journal of Educational Statistics 14 (3): 255–267.
Schad, D. J., S. Hohenstein, S. Vasishth, and R. Kliegl. 2020. How to capitalize on a priori contrasts in linear (mixed) models: A tutorial. Journal of Memory and Language 110 (Article 104038).
Schucany, W. R., and H. K. Tony Ng. 2006. Preliminary Goodness-of-Fit Tests for Normality do not Validate the One-Sample Student t. Communications in Statistics - Theory and Methods 35 (12): 2275–2286.
Shadish, W., T. D. Cook, and D. T. Campbell. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, MA: Houghton Mifflin.
Shieh, G. 2017. Power and Sample Size Calculations for Contrast Analysis in ANCOVA. Multivariate Behavioral Research 52 (1): 1–11.
Shipman, J. E., Q. T. Swanquist, and R. L. Whited. 2017. Propensity Score Matching in Accounting Research. The Accounting Review 92 (1): 213–244.
Sobel, M. E. 1982. Asymptotic Confidence Intervals for Indirect Effects in Structural Equation Models. Sociological Methodology 13: 290.
———. 1986. Some New Results on Indirect Effects and Their Standard Errors in Covariance Structure Models. Sociological Methodology 16: 159.
Sutton, R. I., and B. M. Staw. 1995. What Theory is Not. Administrative Science Quarterly 40 (3): 371.
Thompson, G. L. 1991. A Note on the Rank Transform for Interactions. Biometrika 78 (3): 697–701.
Tukey, J. W. 1980. We Need Both Exploratory and Confirmatory. The American Statistician 34 (1): 23–25.
van de Schoot, R., J. J. Broere, K. H. Perryck, M. Zondervan-Zwijnenburg, and N. E. van Loey. 2015. Analyzing Small Data Sets using Bayesian estimation: The case of posttraumatic stress symptoms following mechanical ventilation in burn survivors. European Journal of Psychotraumatology 6 (251216).
van de Schoot, R., S. D. Winter, O. Ryan, M. Zondervan-Zwijnenburg, and S. Depaoli. 2017. A systematic review of Bayesian articles in psychology: The last 25 years. Psychological Methods 22 (2): 217–239.
van Doorn, J., D. van den Bergh, U. Bohm, F. Dablander, K. Derks, T. Draws, N. J. Evans, et al. 2020. The JASP Guidelines for Conducting and Reporting a Bayesian Analysis. Preprint.
Vessey, I. 1991. Cognitive Fit: A Theory-Based Analysis of the Graphs Versus Tables Literature. Decision Sciences 22 (2): 219–240.
Wagenmakers, E.-J., J. Love, M. Marsman, T. Jamil, A. Ly, J. Verhagen, R. Selker, et al. 2018. Bayesian inference for psychology. Part II: Example applications with JASP. Psychonomic Bulletin & Review 25 (1): 58–76.
Wagenmakers, E.-J., M. Marsman, T. Jamil, A. Ly, J. Verhagen, J. Love, R. Selker, et al. 2018. Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications. Psychonomic Bulletin & Review 25 (1): 35–57.
Wagenmakers, E.-J., R. Wetzels, D. Borsboom, and H. L. J. van der Maas. 2011. Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). Journal of Personality and Social Psychology 100 (3): 426–432.
Wickham, H., and G. Grolemund. 2017. R for data science: Import, tidy, transform, visualize, and model data. 4th release. Sebastopol, CA: O’Reilly Media.
Yzerbyt, V. Y., D. Muller, and C. M. Judd. 2004. Adjusting researchers’ approach to adjustment: On the use of covariates when testing interactions. Journal of Experimental Social Psychology 40 (3): 424–431.
Yzerbyt, V., D. Muller, C. Batailler, and C. M. Judd. 2018. New recommendations for testing indirect effects in mediational models: The need to report and test component paths. Journal of Personality and Social Psychology 115 (6): 929–943.
Zimmerman, D. W. 2004. A note on preliminary tests of equality of variances. British Journal of Mathematical and Statistical Psychology 57 (1): 173–181.