Monday, Aug 7: 2:00 PM - 3:50 PM
Topic-Contributed Paper Session
Metro Toronto Convention Centre
Mental Health Statistics Section
Section on Medical Devices and Diagnostics
Many meta-analyses of collections of N-of-1 trials adopt a multilevel framework with random study effects and intercepts. Such models include two variance components for the random effects. Previous studies of meta-analyses of RCTs have suggested that likelihood-based estimates of these variances may be biased, with the magnitude of the bias possibly depending on how the treatment variable is coded, when estimation is carried out in the frequentist framework. Whether this bias carries over to a Bayesian formulation, or to the setting of N-of-1 trials, is unknown. In an extensive simulation we explore the performance of variance estimation (bias, mean squared error, coverage and precision) under a variety of models (fixed and random intercepts, different codings of treatment, auto-correlations), sample sizes (numbers of trials and lengths of each trial), amounts of within- and between-study variance, and formulations of prior distributions, when outcomes are continuous. We conclude that careful choice of model can improve the accuracy of variance estimation, and that accuracy varies substantially with the underlying variation as well as the number and size of studies.
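The frequentist side of the setup above can be illustrated with a minimal sketch: simulate a set of N-of-1 trials with a random treatment effect per trial, estimate each trial's effect by per-trial OLS, and recover the between-trial variance with a moment (DerSimonian-Laird-style) estimator. All numerical settings (numbers of trials, periods, variances, the ABAB coding) are illustrative assumptions, not the abstract's actual simulation grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical settings, not the abstract's simulation grid
n_trials, n_periods = 20, 8        # trials and measurements per trial
tau2_true, sigma2_true = 1.0, 4.0  # between- and within-trial variance

def simulate_and_estimate():
    # Each trial's true treatment effect is drawn from N(2, tau2)
    theta = rng.normal(2.0, np.sqrt(tau2_true), n_trials)
    est, var = [], []
    for i in range(n_trials):
        x = np.tile([0, 1], n_periods // 2)  # alternating ABAB 0/1 coding
        y = 1.0 + theta[i] * x + rng.normal(0, np.sqrt(sigma2_true), n_periods)
        # Per-trial OLS slope and its estimated sampling variance
        b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
        resid = y - y.mean() - b * (x - x.mean())
        s2 = resid @ resid / (n_periods - 2)
        est.append(b)
        var.append(s2 / (n_periods * np.var(x)))
    est, var = np.array(est), np.array(var)
    # Moment (DerSimonian-Laird) estimator of the between-trial variance
    w = 1.0 / var
    mu = np.sum(w * est) / np.sum(w)
    Q = np.sum(w * (est - mu) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (n_trials - 1)) / c)

tau2_hats = [simulate_and_estimate() for _ in range(200)]
print(f"true tau^2 = {tau2_true}, mean estimate = {np.mean(tau2_hats):.2f}")
```

Repeating this across the factors the abstract varies (codings, trial lengths, variance ratios) is what reveals how strongly the accuracy of the variance estimate depends on the design.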
We present an application of superiority or equivalence trials that have a special cross-over design and are conducted on patient samples. Clinicians can use the information from these trials to improve treatment individualization in N-of-1 trials when improvement is possible, and investigators can determine whether individualization will be productive. The cross-over trials also make it possible to solve the important problem of estimating the optimal number of treatment cycles in an N-of-1 trial. We follow a frequentist framework for the analysis of disease severities and individual treatment benefits based on regression models with random coefficients, and predict the patient's disease severity using partial empirical Bayes (PEB). We measure the relative extent to which PEB reduces the prediction error of post-treatment disease severity by comparing its mean squared error with that of the common treatment-without-individualization approach, which prescribes only the treatment that is superior on average. The number of treatment cycles providing a substantial relative improvement in prediction error, say 80%, is the optimal number. We illustrate with a cross-over trial in hypertensive patients.
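The PEB-versus-average comparison can be sketched as follows: shrink each patient's observed mean benefit toward the population mean by the usual reliability ratio, then compare the mean squared error of that prediction with the error of predicting the population-average benefit for everyone (the no-individualization baseline). All parameter values and the two-treatment contrast structure are illustrative assumptions, not results from the hypertension trial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-treatment cross-over setting (illustrative values only)
n_patients, n_cycles = 200, 4
mu_benefit, tau2, sigma2 = 1.0, 2.0, 3.0  # mean benefit, heterogeneity, noise

# True individual benefit of treatment B over A for each patient
benefit = rng.normal(mu_benefit, np.sqrt(tau2), n_patients)

# Observed per-cycle benefit estimates (one A-B contrast per cycle)
obs = benefit[:, None] + rng.normal(0, np.sqrt(sigma2), (n_patients, n_cycles))
ybar = obs.mean(axis=1)

# Empirical-Bayes shrinkage of each patient's mean toward the population
# mean by the reliability ratio tau2 / (tau2 + sigma2 / n_cycles)
shrink = tau2 / (tau2 + sigma2 / n_cycles)
peb = mu_benefit + shrink * (ybar - mu_benefit)

# Baseline: prescribe the on-average-superior treatment to everyone,
# i.e. predict the population-mean benefit for every patient
mse_peb = np.mean((peb - benefit) ** 2)
mse_avg = np.mean((mu_benefit - benefit) ** 2)
print(f"relative reduction in MSE: {1 - mse_peb / mse_avg:.0%}")
```

Increasing `n_cycles` raises the reliability ratio and hence the relative improvement; the smallest number of cycles at which the improvement clears the chosen threshold (say 80%) plays the role of the optimal cycle count described above.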