A comparison of three effect size indices for count-based outcomes in Single-Case Design studies
Author(s) | Shrestha, Pragya | |
Date Accessioned | 2023-10-09T17:42:01Z | |
Date Available | 2023-10-09T17:42:01Z | |
Publication Date | 2023 | |
SWORD Update | 2023-09-20T19:17:21Z | |
Abstract | In Single-Case Designs (SCD), the outcome variable most commonly involves some form of count data. However, statistical analyses and associated effect size (ES) calculations for count outcomes have only recently been proposed. Three recently proposed ES methods for count data are the Nonlinear Bayesian effect size (Rindskopf, 2014), the Log Response Ratio effect size (Pustejovsky, 2018), and the Bayesian Rate Ratio effect size (Natesan Batley, Shukla Mehta, & Hitchcock, 2021). Although all three methods calculate an ES for count outcome data and can be used with an ABAB design, they differ in their statistical models and estimation frameworks (Bayesian or frequentist) and in whether they assume the presence or absence of autocorrelation, which is frequently present in SCD data. In addition, it has not yet been examined how the ES and standard error estimates from these three indices are affected by overdispersion, a common occurrence in count data. These fundamental differences call for a closer examination and comparison of the methods and the estimates they produce. This dissertation aims to investigate the interpretability and understandability of the estimates produced, as proposed by May (2004); examine whether the three ES indices can be converted to a common metric to facilitate comparison of the ES estimates; document the benefits and challenges of implementing each method; and examine the performance of these ES methods under positive autocorrelation and overdispersion using Monte Carlo simulation. Schmidt (2007), a published SCD study that examined the effect of Class-Wide Function-related Intervention Teams (CW-FIT) on reducing the disruptive behavior of three first-grade students using an ABAB design, was used to examine the interpretability and understandability of the estimates produced and whether the indices can be converted to a common metric. It consisted of three cases with four phases (ABAB) per case. For the simulation study, 1,000 datasets per case were simulated using pre-specified data parameters (number of cases, number of data points within each phase of a case, and phase means) taken from the Schmidt (2007) study, under various conditions of autocorrelation and overdispersion. A fully crossed factorial design with three autocorrelation values (0.0, 0.2, 0.4) and four overdispersion values (0.0001, 0.05, 0.1, 0.3), yielding 12 simulation conditions per case, was used for data generation. All analyses were carried out in R. Results indicate that all three ES estimates are interpretable. LRR meets the understandability criterion; however, both BRR and NLB require advanced statistical knowledge to run the models. The three ES indices can be converted to a common metric because they are all ratios of the phase mean counts. Based on the simulation, all three methods produce nearly unbiased estimates of the effect size under the different data conditions; however, the standard errors are affected by autocorrelation and overdispersion. This dissertation can serve as a resource for SCD researchers and applied practitioners in understanding and interpreting the different ES values from the LRR, NLB, and BRR methods, and can help them make better-informed decisions about which of the three ES indices to use when autocorrelation and overdispersion are present in their data. (An illustrative R sketch of the common-metric conversion under these simulation conditions follows this record.) | |
Advisor | May, Henry | |
Degree | Ph.D. | |
Department | University of Delaware, School of Education | |
DOI | https://doi.org/10.58088/gk65-yk18 | |
Unique Identifier | 1417078419 | |
URL | https://udspace.udel.edu/handle/19716/33463 | |
Language | en | |
Publisher | University of Delaware | |
URI | https://login.udel.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/comparison-three-effect-size-indices-count-based/docview/2867881251/se-2?accountid=10457 | |
Keywords | Count outcomes | |
Keywords | Effect size | |
Keywords | Single-Case Designs | |
Keywords | Bayesian effect | |
Keywords | First grade students | |
Title | A comparison of three effect size indices for count-based outcomes in Single-Case Design studies | |
Type | Thesis |
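The following is a minimal, illustrative R sketch tied to the abstract above: it simulates one ABAB case with a latent AR(1) process (for autocorrelation) and a negative binomial outcome (for overdispersion), computes a log response ratio from the phase means, and exponentiates it onto the ratio-of-phase-means metric that the abstract identifies as common to the three indices. The autocorrelation and overdispersion values are taken from the simulation conditions listed in the abstract; the phase means, phase lengths, and the data-generating model itself are illustrative assumptions rather than the dissertation's or Schmidt's (2007) actual values or model, and the small-sample bias correction of Pustejovsky (2018) is omitted.

set.seed(123)

## Hypothetical phase means (counts of disruptive behavior) and phase length;
## these are NOT the values from Schmidt (2007).
phase_means  <- c(A1 = 12, B1 = 4, A2 = 11, B2 = 3)
phase_length <- 10

## One simulation condition from the abstract's fully crossed design
rho        <- 0.2   # autocorrelation
dispersion <- 0.1   # overdispersion

## Latent AR(1) errors to induce autocorrelation across the series
n_obs  <- length(phase_means) * phase_length
ar_err <- as.numeric(arima.sim(model = list(ar = rho), n = n_obs, sd = 0.1))

## Expected count per observation: phase mean perturbed on the log scale
mu <- rep(phase_means, each = phase_length) * exp(ar_err)

## Negative binomial counts; size = 1/dispersion gives Var(y) = mu + dispersion * mu^2
y <- rnbinom(n_obs, mu = mu, size = 1 / dispersion)

phase    <- rep(names(phase_means), each = phase_length)
baseline <- y[phase %in% c("A1", "A2")]   # A (baseline) phases
treat    <- y[phase %in% c("B1", "B2")]   # B (intervention) phases

## Log response ratio from the phase means (bias correction omitted)
lrr <- log(mean(treat) / mean(baseline))

## Exponentiating returns a rate ratio, i.e., the ratio-of-phase-means metric
## on which the Bayesian Rate Ratio and Nonlinear Bayesian estimates can also be read
rate_ratio <- exp(lrr)

round(c(LRR = lrr, rate_ratio = rate_ratio), 3)

Under these assumptions, a rate ratio below 1 corresponds to a reduction in the disruptive-behavior count during the intervention phases, which is how ratio-scale estimates from the three indices can be compared on a common metric.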