Statistics Seminar Series

Department of Mathematical Sciences
and
Center for Applied Mathematics and Statistics

New Jersey Institute of Technology


Spring 2010

 

All seminars are 4:00 - 5:00 p.m., in Cullimore Hall Room 611 (Math Conference Room) unless noted otherwise. Refreshments are usually served at 3:30 p.m., and talks start at 4:00 p.m. If you have any questions about a particular seminar, please contact the person hosting the speaker.

 

Thursday, January 21, 2010, 4:00 PM
Dr. Xiaodong Lin, Department of Management Science and Information Systems, Rutgers University
Regularization for Stationary Time Series (abstract)
Host: Wenge Guo

Thursday, January 28, 2010, 4:00 PM
Dr. Yujun Wu, Department of Biostatistics and Programming, Sanofi-Aventis Inc., Bridgewater, New Jersey
Fast FSR Variable Selection with Applications to Clinical Trials (abstract)
Host: Sunil Dhar

Thursday, February 4, 2010, 4:00 PM
Glen Laird, Novartis Pharmaceuticals Corporation
Estimation with Overdose Control Implementation at Novartis Oncology (abstract)
Host: Sundar Subramanian

Thursday, February 11, 2010, 4:00 PM
Jon Kettenring, Drew University
Massive Datasets (abstract)
Host: Ari Jain

Thursday, February 18, 2010, 4:00 PM, Kupfrian Hall, Room 210
Xiaolong Luo, Celgene Corporation
Estimation of Treatment Effect Following a Clinical Trial with Adaptive Design (abstract)
Host: Wenge Guo

Thursday, February 25, 2010 (rescheduled to April 1, 2010)
Dr. Chyi-Hung Hsu, Novartis Pharmaceuticals, East Hanover, NJ, USA
Evaluating Potential Benefits of Dose-exposure-response Modeling for Dose Finding (abstract)
Host: Sunil Dhar

Thursday, March 4, 2010, 4:00 PM, Cullimore Hall, Room 110
Dr. Cuiling Wang, Albert Einstein College of Medicine
Correction of Bias from Non-random Missing Longitudinal Data Using Auxiliary Information (abstract)
Host: Chung Chang

Thursday, March 11, 2010, 4:00 PM
M.C. Bhattacharjee, New Jersey Institute of Technology
Are Class-L Distributions Really Aging? (abstract)
Host: Sunil Dhar

Thursday, March 25, 2010, 4:00 PM, Cullimore Hall, Room 110
Xiaohui Luo, Forest Research Institute, Jersey City, NJ, USA
Estimation of Treatment Difference in Proportions in Clinical Trials with Blinded Sample Size Re-estimation (abstract)
Host: Sundar Subramanian

Thursday, April 1, 2010, 4:00 PM, Cullimore Hall, Room 110
Dr. Chyi-Hung Hsu, Novartis Pharmaceuticals, East Hanover, NJ, USA
Evaluating Potential Benefits of Dose-exposure-response Modeling for Dose Finding (abstract)
Host: Sunil Dhar

Thursday, April 8, 2010, 4:00 PM
Mani Lakshminarayanan, Investigative Research, Late Development Statistics, Merck & Co. Inc.
Meaningful and Reproducible Conclusions in Clinical Trials: A Statistician's Perspective (abstract)
Host: Sunil Dhar

Thursday, April 29, 2010, 4:00 PM, Cullimore Hall, Room 110
Dr. Amarjot Kaur, Merck & Co. Inc., Rahway, NJ, USA
Nonproportional Hazards Assumption in Time-To-Event Data (abstract)
Host: Sunil Dhar

ABSTRACTS

Regularization for Stationary Time Series:

The past decade has seen a rapid development of regularization techniques such as ridge regression, LASSO, SCAD, LARS and their extensions. However, these techniques have been developed mainly for circumstances where the observations are independent. In practice, many classes of interesting problems such as financial time series involve dependent data. In this talk, we will first describe extensions of the results of penalized methods for independent data to stationary multivariate time series. Under mild regularity conditions, our penalized estimators are sparse-consistent and possess well-known oracle properties. We demonstrate the utility of our results by developing a sparse version of the full factor GARCH model. Furthermore, we study the problem of regularization for AR(p) models with varying lags. With the appropriate choices of penalty functions, the resulting estimator achieves desired asymptotic consistency as well as automatic selection of important lag coefficients. Finally, we show the applicability of our theory and methods via real and simulated data.

Dr. Xiaodong Lin, Associate Professor, Department of Management Science and Information Systems, Rutgers University ~ January 21, 2010
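The lag-selection behavior described in this abstract can be illustrated with an off-the-shelf lasso fit to a lagged design matrix. The sketch below is not the speaker's method; the AR(2) coefficients, number of candidate lags, and penalty level are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulate a stationary AR(2) series: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
n, p = 3000, 8            # p candidate lags; only lags 1 and 2 are truly active
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

# Lagged design matrix: column j holds lag j + 1 of the series
X = np.column_stack([x[p - 1 - j : n - 1 - j] for j in range(p)])
y = x[p:]

# The L1 penalty shrinks spurious lag coefficients toward (or exactly to) zero
fit = Lasso(alpha=0.05).fit(X, y)
selected = {j + 1 for j, c in enumerate(fit.coef_) if abs(c) > 1e-8}
print(sorted(selected))   # the truly active lags 1 and 2 should survive the penalty
```

With a penalty this size, the estimated coefficients for lags 1 and 2 stay close to 0.6 and -0.3, while most of the spurious lags are set exactly to zero, which is the automatic lag selection the abstract refers to.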

Fast FSR Variable Selection with Applications to Clinical Trials:

A new version of the false selection rate variable selection method of Wu, Boos, and Stefanski (2007, Journal of the American Statistical Association 102, 235-243) is developed that requires no simulation. This version allows the tuning parameter in forward selection to be estimated simply by hand calculation from a summary table of output even for situations where the number of explanatory variables is larger than the sample size. Because of the computational simplicity, the method can be used in permutation tests and inside bagging loops for improved prediction. Illustration is provided in clinical trials for linear regression, logistic regression, and Cox proportional hazards regression.

Dr. Yujun Wu, Department of Biostatistics and Programming, Sanofi-Aventis Inc., Bridgewater, New Jersey 08807, U.S.A. ~ January 28, 2010

Estimation with Overdose Control Implementation at Novartis Oncology:

Various implementations at Novartis of Bayesian logistic regression (BLR) guided by the Escalation with Overdose Control (EWOC) criterion for dose escalation studies will be discussed. These models explicitly control the probability of overdosing throughout dose escalation (in the interest of patient safety) while maintaining reasonably good performance in targeting the maximum tolerated dose (MTD). Comparisons via simulation with the 3+3 and CRM/MCRM methods are reviewed. Pragmatic aspects of the BLR-EWOC approach, including evaluation of new therapies as single agents and/or in combination, multiple dosing schedules, flexible cohort sizes, and incorporation of relevant covariates, are presented. Some practical experiences with the model in clinical trials are discussed, along with model extensions for potential use in future trials.

Glen Laird, Novartis Pharmaceutical Corporation ~ February 4, 2010
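As an illustration of the overdose-control idea (not Novartis's implementation), the sketch below fits a two-parameter Bayesian logistic dose-toxicity model on a grid and recommends the highest dose whose posterior probability of exceeding the target toxicity stays below a feasibility bound. All doses, patient counts, priors, and thresholds are hypothetical.

```python
import numpy as np

# Hypothetical candidate doses (mg) and dose-limiting-toxicity (DLT) data so far
doses = np.array([1.0, 2.5, 5.0, 10.0, 20.0])
n_treated = np.array([3, 3, 3, 0, 0])   # patients treated per dose (assumed)
n_dlt     = np.array([0, 0, 1, 0, 0])   # DLTs observed per dose (assumed)

target_tox = 0.33   # MTD defined as roughly 33% DLT probability
feasibility = 0.25  # EWOC bound: P(toxicity > target) must stay below this

# Two-parameter logistic model on log-dose: P(DLT | d) = logistic(a + b log d),
# with a vague uniform prior on a grid and b > 0 enforcing monotonic toxicity.
a_grid = np.linspace(-6, 3, 150)
b_grid = np.linspace(0.05, 4, 150)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

logpost = np.zeros_like(A)
for d, n, y in zip(doses, n_treated, n_dlt):
    if n == 0:
        continue
    prob = 1.0 / (1.0 + np.exp(-(A + B * np.log(d))))
    logpost += y * np.log(prob) + (n - y) * np.log1p(-prob)
post = np.exp(logpost - logpost.max())
post /= post.sum()

# Posterior probability of overdosing at each dose: P(toxicity(d) > target)
overdose_prob = np.array([
    post[1.0 / (1.0 + np.exp(-(A + B * np.log(d)))) > target_tox].sum()
    for d in doses
])

# EWOC recommendation: the highest dose that satisfies the overdose-control bound
admissible = overdose_prob <= feasibility
next_dose = doses[admissible].max() if admissible.any() else doses[0]
```

Because toxicity is monotone in dose under this model, the overdose probabilities are non-decreasing in dose, so the bound caps escalation exactly as the "explicitly control the probability of overdosing" phrase describes.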

Massive Datasets:

Massive datasets are so labeled because of their size and complexity. They do not yield readily to standard statistical analyses. The resulting frustration has served as a spur to researchers to develop better tools. Some progress has been made, but the need for considerably more explains why this line of research remains a top priority. Interdisciplinary teamwork is at least as important as tools and can be the key to cracking the hard challenges that these datasets pose. This overview talk includes background information, examples, and statistical strategies to illustrate the state of the art. (Reference: Wiley Interdisciplinary Reviews: Computational Statistics, 2009, 25-32.)

Jon Kettenring, Statistics Professor, Drew University ~ February 11, 2010

Estimation of Treatment Effect Following a Clinical Trial with Adaptive Design:

In this paper, we introduce a new framework based on a marked point process (MPP) to model the clinical trial data flow, in which the calendar-time-based process of the trial conduct and any modifications of the study design, including sample size, treatment plan, and/or study endpoints, are retained. This MPP model allows us to use methods of stochastic calculus for the analysis of data from any adaptive trial. As an example, we apply this method to a two-stage drop-the-loser design and extend the work of Sampson and Sill (2005, 2008), Stallard and Friede (2008), Li, Wang, and Ouyang (2009), and others to the non-parametric case. Furthermore, we derive a new procedure for the estimation of the treatment effect and complement results on hypothesis testing, which has been the primary focus in the recent literature.

Xiaolong Luo and S. Peter Ouyang, Celgene Corporation ~ February 18, 2010

Evaluating Potential Benefits of Dose-exposure-response Modeling for Dose Finding:

Dose-regimen selection for confirmatory trials and characterization of dose-response relationship are arguably among the most important and difficult tasks in clinical drug development. Inadequate dose-regimen selection is believed to be one of the key drivers of the high attrition rate in Phase III. Nowadays, drug concentrations are routinely measured in patients in clinical studies throughout the drug development process. This drug exposure information is often used to explain some of the response variability. It could also be used to improve characterization of dose-response, and consequently result in a better dose-regimen selection. A simulation study was undertaken to assess the potential value of dose-response characterization methods that utilize exposure data relative to methods that only require dose and response data.

Chyi-Hung Hsu, Novartis Pharmaceuticals, East Hanover, NJ, USA. Email: chyihung.hsu@novartis.com ~ April 1, 2010 (rescheduled from February 25, 2010)

Correction of Bias from Non-random Missing Longitudinal Data Using Auxiliary Information:

Missing data are common in longitudinal studies due to drop-out, loss to follow-up, and death. Likelihood-based mixed effects models for longitudinal data give valid estimates when the data are missing at random (MAR). This assumption, however, is not testable without further information. In some studies, there is additional information available in the form of an auxiliary variable known to be correlated with the missing outcome of interest. Availability of such auxiliary information provides us with an opportunity to test the MAR assumption. If the MAR assumption is violated, such information can be utilized to reduce or eliminate bias when the missing data process depends on the unobserved outcome through the auxiliary information. We compare two methods of utilizing the auxiliary information: joint modeling of the outcome of interest and the auxiliary variable, and multiple imputation (MI). Simulation studies are performed to examine the two methods. The likelihood-based joint modeling approach is consistent and most efficient when correctly specified. However, mis-specification of the joint distribution can lead to biased results. MI is slightly less efficient than a correct joint modeling approach and more robust to model mis-specification, although a wrong imputation model can also result in bias. An example is presented from a dementia screening study.

Dr. Cuiling Wang, Albert Einstein College of Medicine ~ March 4, 2010 

Are Class-L Distributions Really Aging?

Class-L life distributions, first introduced by Klefsjö (1983), constitute the largest of the standard non-parametric classes familiar in the stochastic modeling of degradation in reliability theory, and have customarily been regarded as an aging class, an interpretation first questioned by Klar (2002). Motivated by the question of what the best possible moment conditions on such distributions are that would imply their exponentiality, we solve this problem and thus provide a characterization of exponential distributions under a Class-L hypothesis. We also discuss and explore some analytical properties of this non-parametric class that can be construed as a criticism of the prevalent aging interpretation of Class-L. [Keywords and phrases: stochastic modeling, aging properties and classes, partial ordering, moment conditions.]

M.C. Bhattacharjee, New Jersey Institute of Technology ~ March 11, 2010 

Estimation of Treatment Difference in Proportions in Clinical Trials with Blinded Sample Size Re-estimation:

Key Words: bias correction, bootstrap, type I error, power, interim analysis, dummy stratification, flexible design.

Shih and Zhao (1997) proposed a design with a simple stratification strategy for clinical trials with binary outcomes to re-estimate the required sample size during the trial without unblinding the interim data. The naïve point estimator of the between-group difference in proportions is positively biased. Two methods are proposed to correct the bias in the naïve estimator: conditional bias correction and bootstrap bias correction. Simulation studies show that our proposed methods compare favorably with other unbiased estimators based on fixed weights.

Xiaohui Luo, Forest Research Institute, Jersey City, NJ, USA ~ March 25, 2010
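Generic bootstrap bias correction, one of the two corrections mentioned in this abstract, can be sketched as follows. The deliberately biased plug-in variance estimator and the sample sizes here are illustrative stand-ins, not the paper's blinded sample-size re-estimation setting.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=2.0, size=60)

def biased_var(x):
    # Plug-in variance estimator (divides by n), biased downward
    return np.mean((x - x.mean()) ** 2)

theta_hat = biased_var(data)

# Bootstrap estimate of the bias: E*[theta*] - theta_hat,
# where theta* is the estimator recomputed on resampled data
B = 2000
boot = np.array([biased_var(rng.choice(data, size=data.size, replace=True))
                 for _ in range(B)])
bias_est = boot.mean() - theta_hat

# Bias-corrected estimate: subtract the estimated bias
theta_corrected = theta_hat - bias_est
```

Since the plug-in estimator is biased downward by a factor (n-1)/n, the bootstrap bias estimate is negative here and the correction nudges the estimate up toward the unbiased value, which is the same logic the paper applies to the treatment-difference estimator.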

Meaningful and Reproducible Conclusions in Clinical Trials: A Statistician's Perspective:

Randomized clinical trials play the fundamental role of gold standard in drug development for determining whether an intervention (drug, device, etc.) is efficacious and safe when administered to a group of volunteers who have a specific disease or condition of interest to the clinical trialists. Abundant literature exists on the design, conduct, and analysis of randomized clinical trials that are used in the approval of new applications globally. In recent years, two important areas that have received special attention from researchers and practitioners are interim monitoring and the control of false positives when the final decision depends on multiple inferential summaries. Adaptive designs provide a general framework (which includes interim monitoring) for clinical study designs that use accumulating data to decide how to modify aspects of the study as it continues, without undermining the validity and integrity of the trial. The flexibility to modify trials while they are under way has become a necessity given that the current industry average late-stage failure rate is around 40%. As a result, the use of adaptive designs in clinical trials is receiving a great deal of attention, as it offers a variety of inherent opportunities such as early estimation of efficacy or futility, verification of assumptions, and the flexibility to make the design simpler.

Similarly, the second issue that is receiving a great deal of attention is control of the false positive rate in clinical trials, where multiplicity is prevalent in all aspects, including multiple comparisons, multiple endpoints, multiple time points, and others. In addition to the classical approach of controlling the familywise error rate (FWER), recent research has dealt with concepts such as the false discovery rate (FDR) and, more recently, the optimal discovery procedure (ODP) as methods for simultaneous testing.

In this talk, basic concepts of adaptive designs and multiplicity issues in clinical trials will be discussed. A more extensive discussion will also be presented on how best to use these two concepts to draw meaningful conclusions at the completion of a clinical trial. Examples will be used in support of the analytical discussions.

Key Words: Adaptive Designs, Interim Monitoring, Familywise Error Rate, False Discovery Rate, Simultaneous Testing.

Mani Lakshminarayanan, Investigative Research, Late Development Statistics, Merck & Co. Inc., Whitehouse Station, New Jersey, USA ~ April 8, 2010 
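The FWER/FDR contrast in this abstract can be made concrete with the two standard procedures, Bonferroni (FWER) and Benjamini-Hochberg (FDR); the p-values below are made up for illustration.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i <= alpha / m; controls the familywise error rate."""
    p = np.asarray(pvals)
    return p <= alpha / p.size

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up: reject the k smallest p-values, where k is the largest i
    with p_(i) <= i * alpha / m; controls the false discovery rate."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest sorted index meeting the bound
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.012, 0.040, 0.200, 0.700]
print(bonferroni(pvals).sum(), benjamini_hochberg(pvals).sum())  # → 2 3
```

On these six p-values, Bonferroni's threshold of 0.05/6 ≈ 0.0083 rejects two hypotheses, while BH's less stringent step-up bound rejects three, illustrating why FDR control gains power when many endpoints or comparisons are tested simultaneously.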

Nonproportional Hazards Assumption in Time-To-Event Data:

When conducting time-to-event analysis using the Cox proportional hazards model, the hazard ratio is assumed to be constant over time. The assumption of proportional hazards is important for the validity of results from this model and for correct interpretation of the data. The adverse impact of a qualitative interaction of a covariate with time on the conclusions is well documented in the literature. However, little literature is available regarding the robustness of conclusions from the Cox model when there is a moderate to small quantitative interaction. In this investigation, we will explore how resilient the Cox model is to quantitative deviations from the proportionality assumption. The magnitude of the loss of efficiency for the score test based on the Cox model will be examined under different scenarios of departure from the proportional hazards assumption.

Dr. Amarjot Kaur, Merck & Co. Inc., Rahway, NJ 07064, USA ~ April 29, 2010