The innovative use of real-world data (RWD) can answer questions that cannot be addressed using data from randomized clinical trials (RCTs). While the sponsors of RCTs have a central database containing all individual patient data (IPD) collected from trials, analysts of RWD face a challenge: regulations on patient privacy make access to IPD from all regions logistically prohibitive. In this research, we propose a double inverse probability weighting (DIPW) approach for the analysis sponsor to estimate the population average treatment effect (PATE) for a target population without the need to access IPD. One probability weighting achieves comparable distributions of confounders across treatment groups; the other generalizes the result from the subpopulation of patients who have data on the endpoint to the whole target population. The likelihood expressions for the propensity scores and the DIPW estimator of the PATE can be written to rely only on regional summary statistics that do not require IPD. Our approach hinges upon the positivity and conditional independence assumptions, prerequisites for most RWD analysis approaches. Simulations are conducted to compare the performance of the proposed method against a modified meta-analysis and a regular meta-analysis.
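As a minimal sketch of the double weighting described above (all numbers, score models, and variable names here are illustrative assumptions, not the paper's; in practice both scores would be estimated, e.g. from regional summary statistics):

```python
import random
import math

random.seed(1)

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulated target population with one confounder x; true treatment effect = 1.
n = 20000
data = []
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    a = 1 if random.random() < expit(0.5 * x) else 0        # treatment depends on x
    r = 1 if random.random() < expit(1.0 - 0.5 * x) else 0  # endpoint observed?
    y = 1.0 * a + 2.0 * x + random.gauss(0.0, 1.0)
    data.append((x, a, r, y))

# Double weights: one for treatment balance, one for generalizing from the
# endpoint-observed subpopulation (true scores used here for illustration).
num = [0.0, 0.0]
den = [0.0, 0.0]
for x, a, r, y in data:
    if r == 0:
        continue                      # endpoint missing for this subject
    e = expit(0.5 * x)                # propensity of treatment
    p = expit(1.0 - 0.5 * x)          # probability the endpoint is observed
    w = (1.0 / (e if a == 1 else 1.0 - e)) * (1.0 / p)
    num[a] += w * y
    den[a] += w

pate_hat = num[1] / den[1] - num[0] / den[0]   # should be close to the true PATE of 1
```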
To design a phase III study with a final endpoint and calculate the required sample size for the desired probability of success, we need a good estimate of the treatment effect on the endpoint. It is prudent to fully utilize all available information, including the historical and phase II information on the treatment as well as external data on the other treatments. It is not uncommon that a phase II study may use a surrogate endpoint as the primary endpoint and have no or limited data for the final endpoint. On the other hand, external information from other studies of other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information may enhance the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to comprehensively deal with the problem. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowing based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performances of different approaches. An example is used to illustrate the applications of the methods.
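The idea of consistency-regulated borrowing can be sketched in a simple normal-normal setting. The discount rule below (a Gaussian kernel in the standardized discrepancy) is one common choice assumed for illustration, not the paper's specific rule:

```python
import math

def dynamic_borrow(theta_c, se_c, theta_h, se_h):
    """Precision-weighted estimate with consistency-based discounting.

    theta_c, se_c: current-study estimate and standard error;
    theta_h, se_h: historical estimate and standard error.
    The historical precision is discounted by delta in (0, 1], which
    shrinks toward 0 as the two estimates diverge.
    """
    z2 = (theta_c - theta_h) ** 2 / (se_c ** 2 + se_h ** 2)
    delta = math.exp(-z2 / 2.0)        # borrowing weight
    w_c = 1.0 / se_c ** 2
    w_h = delta / se_h ** 2            # discounted historical precision
    est = (w_c * theta_c + w_h * theta_h) / (w_c + w_h)
    se = math.sqrt(1.0 / (w_c + w_h))
    return est, se, delta

# Consistent estimates: near-full borrowing, smaller standard error.
est, se, delta = dynamic_borrow(0.30, 0.10, 0.28, 0.08)
```

With conflicting data, e.g. dynamic_borrow(0.30, 0.10, -0.10, 0.08), delta falls below 0.01 and the combined estimate stays close to the current-study value.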
The ICH E14 guidance recommends the use of a time-matched baseline, while others recommend alternative baseline definitions including a day-averaged baseline. In this article we consider six models adjusting for baselines. We derive the explicit covariances and compare the power of these models under various conditions. Simulation results are provided. We conclude that type I error rates are controlled. However, one model outperforms the others on statistical power under certain conditions. In general, the analysis of covariance (ANCOVA) model using a day-averaged baseline is preferred. If the time-matched baseline has to be used as per requests from regulatory agencies, the analysis by time point using the ANCOVA model is recommended.
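The two baseline definitions can be contrasted on simulated QT-style data. This is a minimal sketch with illustrative numbers (not the article's six models): a day-averaged baseline averages several pre-dose measurements, which reduces baseline measurement noise relative to a single time-matched value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 3 baseline time points per subject, one post-dose
# measurement; treatment adds 5 units (all parameters are assumptions).
n = 200
subj = rng.normal(0, 8, n)                        # between-subject variation
base = subj[:, None] + rng.normal(0, 4, (n, 3))   # 3 noisy baseline readings
trt = rng.integers(0, 2, n)
post = subj + 5.0 * trt + rng.normal(0, 4, n)

day_avg_base = base.mean(axis=1)                  # day-averaged baseline
time_matched = base[:, 0]                         # single time-matched value

def ancova(baseline):
    # change-from-baseline ANCOVA: (post - baseline) ~ intercept + trt + baseline
    X = np.column_stack([np.ones(n), trt, baseline])
    beta, *_ = np.linalg.lstsq(X, post - baseline, rcond=None)
    return beta[1]                                # treatment effect estimate

eff_avg = ancova(day_avg_base)       # both estimates are near the true 5;
eff_tm = ancova(time_matched)        # the day-averaged version is less noisy
```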
To address the issue of a large placebo effect in certain therapeutic areas, rather than the application of the traditional gold standard parallel group placebo-controlled design, different versions of the sequential parallel comparison design have been advocated. In general, the design consists of two consecutive stages and three treatment groups. Stage 1 placebo nonresponders potentially form a prespecified patient subgroup for formal between-treatment comparison at the final analysis. In this research, a version of the design is considered for a binary endpoint. To fully utilize all available data, a generalized weighted combination test is proposed in case placebo has a relatively small effect for some of the study endpoints. The weighted combination of the test based on stage 1 data and the test based on stage 2 data of stage 1 placebo nonresponders suggested in the literature uses only a part of the study data and is a special case of this generalized weighted combination test. A multiple imputation approach is outlined for handling data that are missing not at random. Simulation is conducted to evaluate the performances of the methods and a data example is employed to illustrate the applications of the methods.
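The basic weighted combination underlying such tests can be sketched as follows (weights and z-values are illustrative; under the null, a weighted sum of two independent standard-normal statistics, rescaled by the root of the summed squared weights, is again standard normal):

```python
import math

def combo_z(z1, z2, w1, w2):
    """Weighted combination of two independent N(0,1) test statistics
    (e.g. stage 1 data and stage 2 data of stage 1 placebo
    nonresponders); the combination is N(0,1) under the null."""
    return (w1 * z1 + w2 * z2) / math.sqrt(w1 ** 2 + w2 ** 2)

def norm_sf(z):
    """One-sided p-value via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

z = combo_z(2.1, 1.4, 0.6, 0.4)   # about 2.52
p = norm_sf(z)                    # below 0.01 one-sided
```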
If the purpose of a clinical study is not only to test the null hypothesis but also to estimate the magnitude of the treatment effect, the study design should ensure not only that the study will have adequate power but also that it will enable the researcher to report the relevant parameters with an appropriate level of precision. This paper discusses the factors that control precision in survival studies and shows how a computer program may be used to address these issues. The program allows the user to systematically modify assumptions about the population (e.g. the magnitude of the hazard ratio or the attrition rate) and elements of the study design (e.g. sample size and trial duration), quickly identify the impact of these factors on the study's precision, and modify the study design accordingly. The program may also be used to compute power for a planned study, and confidence intervals for a completed study.
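The interplay between power and precision in a survival study can be sketched with textbook approximations (a Schoenfeld-type event count and the usual Var(log HR-hat) ≈ 4/d for 1:1 allocation); this is a simple illustration of the kind of calculation such a program performs, not the program itself:

```python
import math
from statistics import NormalDist

def events_for_power(hr, alpha=0.05, power=0.80):
    """Schoenfeld-type required event count for a two-arm log-rank
    comparison with 1:1 allocation."""
    za = NormalDist().inv_cdf(1 - alpha / 2)
    zb = NormalDist().inv_cdf(power)
    return math.ceil(4.0 * (za + zb) ** 2 / math.log(hr) ** 2)

def ci_half_width(d, alpha=0.05):
    """Approximate half-width of the (1 - alpha) confidence interval
    for log HR when d events are observed."""
    return NormalDist().inv_cdf(1 - alpha / 2) * math.sqrt(4.0 / d)

d = events_for_power(0.75)   # 380 events for HR = 0.75, 80% power
w = ci_half_width(d)         # about 0.20 on the log-HR scale
```

Varying hr, alpha, and power here mirrors the program's systematic modification of population and design assumptions: a study sized only for power may still yield a confidence interval too wide to report the hazard ratio with useful precision.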
Five algorithms are described for imputing partially observed recurrent events modeled by a negative binomial process, or more generally by a mixed Poisson process when the mean function for the recurrent events is continuous over time. We also discuss how to perform the imputation when the mean function of the event process has jump discontinuities. The validity of these algorithms is assessed by simulations. These imputation algorithms are potentially very useful in the implementation of pattern mixture models, which have been widely used as sensitivity analyses under the non-ignorability assumption in clinical trials. A chronic granulomatous disease trial is analyzed for illustrative purposes.
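One standard building block for such imputation can be sketched for the gamma-mixed (negative binomial) case: given lambda ~ Gamma(shape, rate) and n observed events over exposure t, the conditional frailty is Gamma(shape + n, rate + t), so a future count can be drawn by sampling lambda and then a Poisson count. This is a generic sketch of that conditional draw, not one of the paper's five algorithms, and all parameter values are illustrative:

```python
import random
import math

random.seed(2)

def impute_future_count(n_obs, t_obs, s, shape, rate):
    """Impute the event count on an unobserved interval of length s for a
    gamma-mixed Poisson process, conditioning on n_obs events over t_obs."""
    # Posterior frailty: Gamma(shape + n_obs, rate + t_obs); note that
    # random.gammavariate takes (shape, scale), hence the reciprocal rate.
    lam = random.gammavariate(shape + n_obs, 1.0 / (rate + t_obs))
    # The stdlib has no Poisson sampler; invert the CDF by sequential search.
    u, k, p = random.random(), 0, math.exp(-lam * s)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam * s / k
        cdf += p
    return k

# A subject with 3 events over 1 year, imputing 1 further year:
draws = [impute_future_count(3, 1.0, 1.0, shape=2.0, rate=1.0)
         for _ in range(5000)]
mean_imputed = sum(draws) / len(draws)   # near (2 + 3) / (1 + 1) = 2.5
```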
We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment group. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required sample size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
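A simplified version of the relative-metric calculation can be sketched for a common follow-up time (the paper's formulae additionally handle unequal and between-subject-varying follow-up). The per-subject variance approximation 1/(t*r) + k for the log rate estimate is the standard one; all inputs below are illustrative:

```python
import math
from statistics import NormalDist

def nb_sample_size(r0, r1, t, k0, k1, margin, alpha=0.025, power=0.9):
    """Per-arm sample size for a noninferiority comparison of two negative
    binomial event rates on the log rate-ratio scale, with a common
    follow-up time t and arm-specific dispersions k0, k1.

    'margin' is the noninferiority margin expressed as a rate ratio.
    """
    za = NormalDist().inv_cdf(1 - alpha)
    zb = NormalDist().inv_cdf(power)
    var = (1.0 / (t * r0) + k0) + (1.0 / (t * r1) + k1)
    eff = math.log(r1 / r0) - math.log(margin)
    return math.ceil((za + zb) ** 2 * var / eff ** 2)

# Equal true rates, 2 years of follow-up, dispersion 0.8, margin 1.25:
n = nb_sample_size(r0=1.0, r1=1.0, t=2.0, k0=0.8, k1=0.8, margin=1.25)
```

Shortening t inflates 1/(t*r) and hence n, which is one reason replacing individual follow-up times by their mean can understate the required size.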
Non-proportional hazards have been observed in many studies, especially in immuno-oncology clinical trials. Traditional analysis, which combines the log-rank test for significance testing with the Cox model for treatment effect estimation, becomes questionable as this approach relies heavily on the proportional hazards assumption. Inspired by MCP-Mod (the multiple comparisons and modeling approach) that has been widely used in dose-finding studies, we propose a similar approach to handle non-proportional hazards. Using this approach, an efficacy signal is first established by a max-combo test, after which hazard ratios over time are estimated using a logically nested spline model. Simulation studies and real-data examples are used to illustrate the use of this approach.
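The max-combo step takes the maximum of several correlated weighted log-rank z-statistics (e.g. Fleming-Harrington weights) and refers it to the null distribution of that maximum. A minimal Monte Carlo sketch of the p-value computation, assuming the component z-values and their null correlation matrix have already been obtained from the data (the inputs below are illustrative):

```python
import math
import random

random.seed(3)

def cholesky(m):
    """Lower-triangular Cholesky factor of a small positive-definite matrix."""
    d = len(m)
    L = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(m[i][i] - s) if i == j else (m[i][j] - s) / L[j][j]
    return L

def max_combo_p(z_stats, corr, n_sim=100_000):
    """One-sided Monte Carlo p-value for max(z_stats) under a
    multivariate normal null with correlation matrix corr."""
    L = cholesky(corr)
    d = len(z_stats)
    obs = max(z_stats)
    exceed = 0
    for _ in range(n_sim):
        g = [random.gauss(0.0, 1.0) for _ in range(d)]
        zs = [sum(L[i][k] * g[k] for k in range(i + 1)) for i in range(d)]
        if max(zs) >= obs:
            exceed += 1
    return exceed / n_sim

# Two weighted log-rank statistics with null correlation 0.8:
p = max_combo_p([2.4, 2.9], [[1.0, 0.8], [0.8, 1.0]])
```

Because the maximum is used, p lies between the single-statistic tail probability and the Bonferroni bound; the correlation adjustment recovers most of the power a Bonferroni correction would give away.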
The paradigm for new drug development has changed dramatically over the last decade. Even though new technology increases efficiency in many aspects, it now actually takes longer and costs more to develop a new drug, partially due to much more stringent regulatory requirements. To deal with this challenge, the pharmaceutical industry has taken several initiatives. These include exploring emerging markets, conducting global trials, and building research and development centers in emerging markets to curb spending. In particular, a current trend is for major pharmaceutical companies to offshore part of their biostatistical support to China. In this paper, we first discuss the skill set for trial statisticians in the new era. We then elaborate on some of the approaches for acquiring statistical talent and capacity in general, particularly in emerging markets. We also make some recommendations on the use of the PDUFA strategy and on collaborations among industry, health authorities, and academia from an emerging-market statistical perspective. (C) 2013 Elsevier Inc. All rights reserved.
Multiple endpoints and historical data borrowing may be simultaneously incorporated for enhancing efficiency and speeding up the new drug development process in the pharmaceutical industry. O'Brien's test is a widely used weighted combination test for multiplicity adjustment on multiple endpoints to control the overall type I error rate in a weak sense. In this research, a modification of O'Brien's test, specifically of its weights, is considered for a trial with two primary endpoints to potentially increase power. The method can handle missing data in the current study and in the prior derivation for dynamic historical data borrowing. Simulations are conducted to compare the performances of different methods. A data example is used to illustrate the applications of the methods.
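For reference, O'Brien's OLS combination for two endpoints has a simple closed form: with equal weights w proportional to R^{-1}1, the statistic reduces to (z1 + z2) / sqrt(2(1 + rho)) for endpoint correlation rho, and is N(0,1) under the global null. This sketch shows the unmodified test (the paper's contribution is a modification of the weights, not reproduced here); the z-values and correlation are illustrative:

```python
import math
from statistics import NormalDist

def obrien_ols(z1, z2, rho):
    """O'Brien's OLS combination test for two endpoints with
    standardized statistics z1, z2 and correlation rho."""
    return (z1 + z2) / math.sqrt(2.0 * (1.0 + rho))

# Neither endpoint is significant alone at one-sided 0.025,
# but the combination is:
t = obrien_ols(1.8, 1.7, 0.3)            # about 2.17
p = 1.0 - NormalDist().cdf(t)            # one-sided p below 0.025
```

Note how a lower correlation rho strengthens the combined evidence: the two endpoints then carry more nearly independent information.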